The Pink Sparkly Ball Thing (Use unique, non-obvious terms for nuanced concepts)
post by MalcolmOcean (malcolmocean) · 2016-02-20T23:25:16.034Z · LW · GW
Naming things! Naming things is hard. It's been claimed that it's one of the hardest parts of computer science. Now, this might sound surprising, but one of my favourite examples of naming is Kahneman's **System 1 and System 2**.
I want you to pause for a few seconds and consider what comes to mind when you read just the bolded phrase above.
Got it?
If you're familiar with the concepts of S1 and S2, then you probably have a pretty rich sense of what I'm talking about. Or perhaps you have a partial notion: "I think it was about..." or something. If you've never been exposed to the concept, then you probably have no idea.
Now, Kahneman could have reasonably named these systems lots of other things, like "emotional cognition" and "rational cognition"... or "fast, automatic thinking" and "slow, deliberate thinking". But now imagine that it had been "emotional and rational cognition" that Kahneman had written about, and consider the effect on the earlier paragraph.
It would be about the same for those who had studied it in depth, but now those who had heard about it briefly (or maybe at one point knew about the concepts) would be reminded of that one particular contrast between S1 and S2 (emotion/reason) and be primed to think it was the main one, forgetting about all of the other parameters that the distinction seeks to describe. Those who had never heard of Kahneman's research might assume that they basically knew what the terms were about, because they already had a sense of what emotion and reason are.
This is related to a concept known as verbal overshadowing, in which a verbal description of a scene can cause eyewitnesses to misremember its details. Words can disrupt lots of other things too, including our ability to think clearly about concepts.
An example of this in action is the Ask and Guess Culture model (later extended with Tell, and then Reveal). People who are trying to use the models become hugely distracted by the particular names of the entities in the model, which have only a rough bearing on the nuanced elements of these cultures. Even after thinking about this a ton myself, I still found myself accidentally assuming that questions are an Ask Culture thing.
So "System 1" and "System 2" have several advantages:
- they don't create the impression that you already understand them if you haven't been exposed to that particular source
- they don't overshadow the concepts for people who do know them, misleading them into assuming that the names contain the most important features
Another example that I think is decent (though not as clean as S1/S2) is Scott Alexander's use of Red Tribe and Blue Tribe to refer to culture clusters that roughly correspond to right and left political leanings in the USA. (For readers in most other countries: the US has its colors backwards... blue is left wing and red is right wing.) The colors make it reasonably easy to associate and remember, but unless you've read the post (or talked with someone who has), you won't necessarily know the jargon.
Jargon vs in-jokes
All of the examples I've listed above are essentially jargon—terminology that isn't available to the general public. I'm generally in favour of jargon! If you want to precisely and concisely convey a concept that doesn't already have its own word, then you have two options.
"Coining new jargon words (neologisms) is an alternative to formulating unusually precise meanings of commonly-heard words when one needs to convey a specific meaning." — fubarobfusco on a LW thread
Doing the latter is often safe when you're in a technical context. "Energy" is a colloquial term, but it also has a precise technical meaning. Since, in technical contexts, people will tend to assume that all such terms have technical meanings (or will even learn said meanings early on), there is little risk of confusion here. Usually.
I'm going to make a case that it's worth treating nuanced concepts like in-jokes: don't make the meaning feel like it's in the term. Now, I'm not sold that this is a good idea all the time, but it seems to have some merit to it. I'm interested in where it works and where it doesn't; don't take this article to suggest I think it's universally good. Let's jam on where it's good.
Communication is built on shared understanding. Much of this comes from the commons: almost all of the words you're reading in this blog post are not words that you and I had to guarantee we both understood before I could write the post. Sometimes, blog posts (or books, lectures, etc) will contain definitions, or will try to triangulate a concept with examples. The author hopes that the reader will indeed have a similar handle on the word they're using after reading the definition. (The reader may not, of course. Also, they might think they do. Or be confused.)
When you have the chance to interact with someone in real-time, 1-on-1, you can often gauge their understanding because they'll try to paraphrase the thing, and you can usually tell if the thing that they say is the kind of thing someone who understood would say. This is great, because then you can feel confident that you can use that concept as a building block in explaining further concepts.
One common failure mode of communication is when people assume that they're using the same building blocks as each other, when in fact they're using importantly different concepts. This is the issue that rationalist taboo is designed to combat: forbid use of a confounding word and force the conversationalists to build the concept up from component parts again.
Another way to reduce the occurrence of this sort of thing is to use jargon and in-jokes, because then the person is going to draw a blank if they don't already have the shared understanding. You had to be there, and if you weren't, something key is obviously missing.
I once had a long conversation with someone, and we ended up using a lot of the objects we had with us as props when explaining certain concepts. This had the curious effect that if we wanted to reference our shared understanding of the earlier concept, we could refer to the object and it became really clear that it was our shared understanding we were referencing, not some more general thing. So I could say "the banana thing" to refer to him having explored the notion that evilness is a property of the map, not the territory, by remarking that a banana can't be evil but that we can think it evil.
The important thing here is that it felt easier to point clearly at that topic by saying "the banana thing", because we both knew what that was and didn't risk accidentally overshadowing it by saying "the objects aren't evil thing", which might eventually get turned into a catchphrase that seems to contain the meaning but never actually contained the critical insight.
This prompted me to think that it might be valuable to buy a bunch of toys from a thrift store, and to keep them at hand when hanging out with a particular person or small group. When you have a concept to explore, you'd grab an unused toy that seemed to suit it decently well, and then you'd gesture with it while explaining the concept. Then later you could refer to "the pink sparkly ball thing" or simply "this thing" while gesturing at the ball. Possibly, the other person wouldn't remember, or not immediately. But if they did, you could be much more confident that you were on the same page. It's a kind of shared mnemonic handle.
In some ways, this is already a natural part of human communication: I recall years ago talking to a friend and saying "oh, it's like the thing we talked about on my porch last summer" and she immediately knew what I meant. I'm basically proposing to take it further, by using props or by inventing new words.
Unfortunately, terms often end up losing their nuance, for various reasons. Sometimes this happens because the small concept they were trying to point at happens to be surrounded by a vacuum, so it expands. Other times because of shibboleths and people wanting to use in-group words. Or the words are used playfully and poetically, for humor purposes, which then makes it less clear that they once had a precise meaning.
This suggests there might be a kind of terminological inflation thing going on. And to the extent that signalling by using jargon is anti-inductive, that'll dilute things too.
I think if you're trying to think complex thoughts, it's worth developing specialized language, not just with groups of people, but even in 1-on-1 contexts. Of course, pay attention so you don't use terms with people who totally don't know them.
And this, this developing of shared language beyond what's strictly necessary but still worthwhile... this, perhaps, we might call the pink sparkly ball thing.
(this article is crossposted from malcolmocean.com)
14 comments
comment by Unnamed · 2016-02-21T06:47:17.215Z · LW(p) · GW(p)
Coincidentally, Scott Alexander just wrote a post with nonfiction writing advice which includes:
9. Use strong concept handles
The idea of concept-handles is itself a concept-handle; it means a catchy phrase that sums up a complex topic.
Eliezer Yudkowsky is really good at this. “belief in belief“, “semantic stopsigns“, “applause lights“, “Pascal’s mugging“, “adaptation-executors vs. fitness-maximizers“, “reversed stupidity vs. intelligence“, “joy in the merely real” – all of these are interesting ideas, but more important they’re interesting ideas with short catchy names that everybody knows, so we can talk about them easily.
I have very consciously tried to emulate that when talking about ideas like trivial inconveniences, meta-contrarianism, toxoplasma, and Moloch.
I would go even further and say that this is one of the most important things a blog like this can do. I’m not too likely to discover some entirely new social phenomenon that nobody’s ever thought about before. But there are a lot of things people have vague nebulous ideas about that they can’t quite put into words. Changing those into crystal-clear ideas they can manipulate and discuss with others is a big deal.
If you figure out something interesting and very briefly cram it into somebody else’s head, don’t waste that! Give it a nice concept-handle so that they’ll remember it and be able to use it to solve other problems!
I'll add that memorable, idea-crystallizing labels can also be useful for your own thinking, even if you only use them in your own head. Instead of thinking "I'm doing that thing, I should do that other thing instead" or "I'm doing that thing where [20-word description], better switch to [12-word description]" you tell yourself (e.g.) "That feels like doublethink, time to singlethink."
comment by mako yass (MakoYass) · 2016-02-21T06:29:19.804Z · LW(p) · GW(p)
While I'll agree that the System X naming scheme does extraordinarily well at avoiding muddying the underlying definition with colloquial, poetic, or aesthetic baggage, I'm fucking astonished to see someone advocating it in a community that tends to take its cues from computer science. My kin, it's like calling a new datatype Object1. You don't do that. It's the most generic, meaningless, unmemorable name conceivable. The only name more generic would be "Thing", and System isn't much better than Thing, essentially meaning "Group of connected things" (a set which contains almost every class of thing aside from, maybe, Subatomic Particle. For now. (Right? I'm not a physicist. I feel like I might be wrong about that.)).
I think the best way forward is to establish norms that encourage the creation of totally new words, portmanteaus, or well-abbreviated compound words. For instance, a friend of mine came up with a theory he called Complex Patternism. We'd both read the right kind of science fiction, so he didn't have any objections to changing the name to Compat. This saved a lot of typing over the next few months. If you knew the original phrase, you would recognize the contraction. If you didn't, you would have to ask for a definition; people wouldn't bring any of their own baggage about the words "complex" or "patternism" along. It's kind of like an acronym, only pronounceable, and when we realized that the precepts of Patternism weren't really necessary for the theory to work, the original etymology fell away; it was still a lot better than an acronym would have been. It had become a word with no baggage at all.
So yeah, I'm a big advocate of portmanteaus. Compose them of highly abbreviated, vague atoms and you can take them a long way from their original meaning if you ever need to.
↑ comment by MalcolmOcean (malcolmocean) · 2016-02-21T10:06:34.094Z · LW(p) · GW(p)
Ah, true. Yeah, as much as I like S1 and S2, I think they might be pretty annoying if we used them for lots of things. Or... maybe not! I easily track the 5 Kegan levels, the 9 personality types in the enneagram, and various other numbered things, and only occasionally, briefly, become confused. I think this benefit of low overshadowing is pretty good.
I like compat.
comment by MarsColony_in10years · 2016-02-21T06:33:06.354Z · LW(p) · GW(p)
I've always hated jargon, and this piece did a good job of convincing me of its necessity. I plan to add a lot of jargon to an Anki deck, to avoid hand-waving at big concepts quite so much.
However, there are still some pretty big drawbacks in certain circumstances. A recent Slate Star Codex comment expressed it better than I ever have:
One cautionary note about “Use strong concept handles”: This leans very close to coining new terms, and that can cause problems.
Dr. K. Eric Drexler coined quite a few of them while arguing for the feasibility of atomically precise fabrication (aka nanotechnology): “exoergic”, “eutactic”, “machine phase”, and I think that contributed to his difficulties.
If a newly coined term spreads widely, great! Yes, it will be an aid to clarity of discussion. If it spreads throughout one group, but not widely, then it becomes an in-group marker. To the extent that it marks group boundaries, it then becomes yet another bone of contention. If it is only noticed and used within a very small group, then it becomes something like project-specific jargon – cryptic to anyone outside a very narrow group (even to the equivalent of adjacent departments) – and can wind up impeding communications.
↑ comment by MalcolmOcean (malcolmocean) · 2016-02-21T10:02:26.192Z · LW(p) · GW(p)
"I've always hated jargon, and this piece did a good job of convincing me of its necessity."
:)
Feels good to change a mind. I'm curious if there were any parts of the post in particular that connected for you.
↑ comment by MarsColony_in10years · 2016-02-23T04:06:54.488Z · LW(p) · GW(p)
Although compressing a complex concept down to a short term obviously isn't lossless compression, I hadn't considered how confusing the illusion of transparency might be. I would have strongly preferred that "Thinking, Fast and Slow" continue to use the words "fast" and "slow". As such, these were quite novel points:
they don't create the impression that you already understand them if you haven't been exposed to that particular source
they don't overshadow the concepts for people who do know them, misleading them into assuming that the names contain the most important features
The notion of using various examples to "triangulate" a precise meaning was also new to me. It calls to mind the image of a Venn diagram with 3 circles, each representing an example. I don't think I have mental models for several aspects of learning. Gwern's write-up on spaced repetition gave me an understanding of how memorization works, but it hadn't occurred to me that I had a similar gap in my model (or lack thereof) of how understanding works.
(I'm not sure the triangulation metaphor lends much additional predictive power. However, an explicit model is a step up from a vague notion that it's useful to have more examples with more variety.)
comment by mako yass (MakoYass) · 2016-02-21T07:10:00.025Z · LW(p) · GW(p)
Hofstadter's term Superrationality is a hell of an example of a word that's totally useless due to the way it violates this principle. If someone who doesn't know the official definition hears you using the term Superrationality, they'll probably imagine that it's just some kind of very strong form of rationality as they already know it, despite the fact that it is very much not rationality as they already know it. That unaired disagreement will quickly poison any discussion that invokes the term.
comment by buybuydandavis · 2016-02-21T00:03:52.553Z · LW(p) · GW(p)
Korzybski explained his extensive creation of jargon in General Semantics similarly. (My recollection.)
Since his project was cleaning up semantic confusions, he argued that it was better to create new jargon than to use existing words with their existing associated semantic confusions.
The other side of that pancake is that your jargon makes it much harder for outsiders to interact with your system, either by consuming it or by critiquing it. Both from the outside and the inside, it looks like a cult with its own special truths, available only to the initiated.
↑ comment by Viliam · 2016-02-22T09:57:03.300Z · LW(p) · GW(p)
The other side of that pancake is that your jargon makes it much harder for outsiders to interact with your system, either by consuming it or by critiquing it. Both from the outside and the inside, it looks like a cult with its own special truths, available only to the initiated.
Yes. (Connotational disclaimer: blindly reversing cultishness is also not a healthy way of living.)
Creating new words comes with a price. Sometimes paying the price is worth it, sometimes it is not. The price is cognitive (having more things to remember) and social (increasing the communicational barrier between you and people who don't speak your jargon). In return you may get an ability to think better about some aspect of the world (but you are quite likely to hugely overestimate the benefits).
The cults waste cognitive resources of their members and separate them from the environment for no good reasons. Well, no good-for-the-members reasons; the benefits for the cult itself are obvious: providing fake value (usually in exchange for real resources), and retaining the members longer by isolating them socially.
But a lot of education is also based on this. You learn words like "vector" because it is easier than saying the full description every time you want to do something with vectors; and there are many things you can do with vectors. Same for "noun", "adjective", and "verb"; or "atom" and "molecule"; or many other words you learn at school.
Creating words is a powerful human tool, and it can also be used in many wrong ways, by stupidity or malice. Concepts that don't correspond to real things. Concepts that feel like they provide a deep insight, but in fact only unnecessarily pollute the dictionary. Concepts invented for political reasons (cutting the thingspace according to a political dogma, regardless of the natural structure of the territory), or for status reasons (concept-coining is high-status). A frequent sin is creating new words for something that already has a name (because of ignorance, or for status reasons).
Be aware of these biases, especially if you find yourself creating too many concepts.
comment by Gunnar_Zarncke · 2016-02-21T09:16:36.381Z · LW(p) · GW(p)
See also the 17 Rules to Make a Definition or EY's earlier 37 Ways That Words Can Be Wrong.
comment by ChristianKl · 2016-02-21T10:30:29.125Z · LW(p) · GW(p)
I don't believe that System 1 and System 2 are good names. I do understand the motivation of using names that aren't already loaded with meaning, but there are other ways.
You can use Greek and Latin syllables to make up new terms.
I recently learned about Piaget's stages of learning: assimilation, accommodation, and equilibration. Those words don't really create conflicts with existing concepts. At the same time, the names convey more than a numeric ordering.
Philosophers have already named many concepts. Often it's better to use one of the existing names than to make up a new one.
comment by cousin_it · 2016-02-25T00:28:55.646Z · LW(p) · GW(p)
I think it's okay to invent technical terms with precise meanings, like "Prisoner's Dilemma", though I try to be careful even then. But using snappy names for vague mental and social concepts often feels shady to me, because it can come across as an almost cult-like overconfidence.
comment by Gleb_Tsipursky · 2016-02-21T04:21:51.127Z · LW(p) · GW(p)
It seems like naming things this way can serve some purposes, but not others. In which contexts do you see it being beneficial, and in which not?