Antijargon Project

post by jefftk (jkaufman) · 2013-05-05T17:26:33.879Z · LW · GW · Legacy · 29 comments

When a group of people talk to each other a lot, they develop terms that they can use in place of larger concepts. This makes it easier to talk to people inside the group, but then it's harder to talk about the same ideas with people outside the group. If we were smart enough to keep up fully independent vocabularies where we would always use the right words for the people we were talking to, this wouldn't be an issue. But instead we get in the habit of saying weird words, and then when we want to talk to people who don't know those words we either struggle to find words they know or waste a lot of time introducing words. Especially when the group jargon term offers only a minor advantage over the non-jargon phrasing, I think this is a bad tradeoff if you also want to speak to people outside the group.

Recently I've been working on using as little jargon as possible. Pushing myself to speak conventionally, even when among people who would understand weird terms a little faster, can be frustrating, but I think I'm also getting better at it.

 

I also posted this on my blog

29 comments

Comments sorted by top scores.

comment by fubarobfusco · 2013-05-05T22:28:45.657Z · LW(p) · GW(p)

Coining new jargon words (neologisms) is an alternative to formulating unusually precise meanings of commonly-heard words when one needs to convey a specific meaning.

A Marxist may use the term "surplus value" to specifically mean the difference between a worker's productivity and wage. If they say "surplus value" to someone who does not recognize this specific meaning, that person may think the Marxist means "surplus" in the sense of "unnecessary excess". They may think the Marxist means that the worker's productivity is wasted, and respond accordingly. This may baffle the Marxist, who will point out that "surplus value" (in their sense) doesn't have much to do with "overproduction" (another word that has a specific meaning in Marxist economics).

Using neologisms has the advantage that it conveys readily, to someone unfamiliar with the neologism, that they are unfamiliar with it and need to ask for clarification. Using existing words with unusually precise meanings runs the risk of letting someone go past a misunderstood word without realizing that they are doing so.

Replies from: Emile, army1987
comment by Emile · 2013-05-06T08:32:45.384Z · LW(p) · GW(p)

I agree that the distinction between neologisms and overloading existing words is an important one (and your examples are good!) - but I think the ordinary understanding of "jargon" covers both.

If someone announces "I'm going to stop using jargon!", and goes on to say things like "steel man", "shut up and multiply", "dark arts", then most people will agree he failed. The list of LessWrong Jargon contains plenty of non-neologisms like that.

Neologisms are a bit more obvious, but even the distinction between somewhat rare words (like "neologism") and specialized jargon (like "overloading") is pretty fuzzy.

Replies from: army1987, Dan_Moore
comment by A1987dM (army1987) · 2013-05-06T20:15:36.169Z · LW(p) · GW(p)

But no-one's going to assume that “steel man” refers to a man made of steel, or “dark arts” to arts of a dark colour, so they do qualify as neologisms in fubarobfusco's sense. (OTOH, I do seem to recall someone on LW or OB who had assumed that “shut up and multiply” was an exhortation to have lots of children, and went WTF.)

Replies from: fubarobfusco
comment by fubarobfusco · 2013-05-07T01:47:36.126Z · LW(p) · GW(p)

A lot of the LW sense of "dark arts" could be found in the mainstream expression "dirty tricks", which is slightly more general but not much: "cognitive dirty tricks" would be pretty clear. A significant part of both terms' meaning is that using the techniques so named is unethical or unfair on account of being manipulative of others.

(OTOH, I do seem to recall someone on LW or OB who had assumed that “shut up and multiply” was an exhortation to have lots of children, and went WTF.)

I don't recall this incident, but if a newcomer came across an evolutionary psychology discussion and saw that expression, that would be the obvious interpretation!

comment by Dan_Moore · 2013-05-09T15:14:45.881Z · LW(p) · GW(p)

The list of LessWrong Jargon contains plenty of non-neologisms like that.

ADBOC?

comment by A1987dM (army1987) · 2013-05-06T20:20:11.648Z · LW(p) · GW(p)

I don't think jkaufman meant we should use familiar-sounding words with unfamiliar overly precise meanings, but rather that we shouldn't get in the habit of using unfamiliar overly precise concepts even when we don't really need to (“unfamiliar” here meaning ‘unfamiliar to most audiences’, not ‘unfamiliar to the speaker’, of course).

comment by James_Miller · 2013-05-05T18:32:22.955Z · LW(p) · GW(p)

Part of being an effective communicator is optimizing what you say for your audience. You shouldn't take pride in not trying to do this. Train your brain to make optimal use of jargon given your audience, not to minimize your use of jargon.

New college professors often have trouble teaching "down" to the level of their students, but the solution for them is not to lower the complexity of their conversations with everyone, but rather to train their brains to respond differently when talking to students as opposed to colleagues.

Replies from: gjm, lucidian
comment by gjm · 2013-05-06T00:09:20.951Z · LW(p) · GW(p)

This seems nonresponsive to jkaufman's stated reason for trying to minimize jargon instead of using it optimally, namely this:

If we were smart enough to keep up fully independent vocabularies where we would always use the right words for the people we were talking to, this wouldn't be an issue. But instead we get in the habit of saying weird words, and then when we want to talk to people who don't know those words we either struggle to find words they know or waste a lot of time introducing words.

comment by lucidian · 2013-05-05T18:54:05.552Z · LW(p) · GW(p)

I agree with you that it's useful to optimize communication strategies for your audience. However, I don't think that always results in using shared jargon. Deliberately avoiding jargon can presumably provide new perspectives, or clarify issues and definitions in much the way that a rationalist taboo would.

Replies from: James_Miller
comment by James_Miller · 2013-05-05T19:46:22.418Z · LW(p) · GW(p)

But good jargon reduces the time it takes to communicate ideas and so allows for more time to gain new perspectives.

Replies from: lucidian
comment by lucidian · 2013-05-05T19:52:45.831Z · LW(p) · GW(p)

Unless the jargon perpetuates a false dichotomy, or otherwise obscures relevant content. In politics, those who think in terms of a black-and-white distinction between liberal and conservative may have a hard time understanding positions that fall in the middle (or defy the spectrum altogether). Or, on LessWrong, people often employ social-status-based explanations. We all have the jargon for that, so it's easy to think about and communicate, but focusing on status-motivations obscures people's other motivations.

(I was going to explain this in terms of dimensionality reduction, but then I thought better of using potentially-obscure machine learning jargon. =) )

comment by Ratcourse · 2013-05-05T18:02:34.829Z · LW(p) · GW(p)

Maybe this should be in the Open Thread?

Nonetheless, I feel that if you can't explain it without using jargon, that gives some evidence for you not understanding it in the first place (whatever it is).

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-05-06T08:38:24.323Z · LW(p) · GW(p)

I feel that if you can't explain it without using jargon, that gives some evidence for you not understanding it in the first place (whatever it is).

Eh. Maybe if you're using it in a guessing-the-teacher's-password kind of way, but sometimes you need jargon because you need to say something very precise (e.g. in mathematics).

comment by lucidian · 2013-05-05T18:30:54.422Z · LW(p) · GW(p)

This is closely related to something my friend pointed out a couple of weeks ago. Jargon doesn't just make us less able to communicate with people from outside groups - it makes us less willing to communicate with them.

As truth-seeking rationalists, we should be interested in communicating with people who make good arguments, consider points carefully, etc. But I think we often judge someone's rationality based on jargon instead of the content of their message. If someone uses a lot of LessWrong jargon, it gives a prior that they are rational, which may bias us in favor of their arguments. If someone doesn't use any LW jargon (or worse, uses jargon from some other unrelated community), then it might give a prior that they're irrational, or won't have acquired the background concepts necessary for rational discussion. Then we'll be biased against their arguments. This contributes to LW becoming a filter bubble.

I think this is a very important bias to combat. Shared jargon reflects a shared conceptual system, and our conceptual systems constrain the sort of ideas that we can come up with. One of the best ways to get new ideas is to try understanding a different worldview, with a different collection of concepts and jargon. That worldview might be full of incorrect ideas, but it still broadens the range of ideas you can think about.

So, thanks for this post. =) I hope you will discuss the results of your attempt to speak without jargon.

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2013-05-05T20:42:43.789Z · LW(p) · GW(p)

That's not what prior means. You mean evidence.

Replies from: lucidian
comment by lucidian · 2013-05-05T21:18:54.047Z · LW(p) · GW(p)

Hmm, you're probably right. I guess I was thinking that quick heuristics (vocabulary choice, spelling ability, etc.) form a prior when you are evaluating the actual quality of the argument based on its contents, but evidence might be a better word.

Where is the line drawn between evidence and prior? If I'm evaluating a person's argument, and I know that he's made bad arguments in the past, is that knowledge prior or evidence?

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2013-05-05T22:13:55.980Z · LW(p) · GW(p)

Where that goes depends on whether you're evaluating "He's right" or "This argument is right".
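
(A one-formula restatement of the distinction, for readers new to the terms - the split itself is standard Bayes; treating "past bad arguments" as one slot or the other, as this thread does, is a judgment call:)

```latex
% Bayes' theorem: P(H) is the prior, E is the evidence being conditioned on.
% The same fact can sit in either slot, depending on what you have already
% folded into P(H) and which hypothesis H ("he's right" vs. "this argument
% is right") you are evaluating.
P(H \mid E) = \frac{P(E \mid H)\, P(H)}{P(E)}
```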

comment by Qiaochu_Yuan · 2013-05-06T08:40:16.856Z · LW(p) · GW(p)

Examples? I'm not sure I understand what sort of jargon you particularly want to cut. The kind of jargon I use in my day-to-day work (mathematics) is more or less indispensable.

Replies from: jkaufman
comment by jefftk (jkaufman) · 2013-05-06T13:24:11.519Z · LW(p) · GW(p)

In a LessWrong context this would be avoiding saying things like "that's inconsistent with my model of you", "I'll need to update on that", or "that charity is nearly donkey-sanctuary in its fuzzies to utilons ratio".

comment by MileyCyrus · 2013-05-05T22:48:31.647Z · LW(p) · GW(p)

I've found "machine problem-solving" goes over with laypeople better than "machine intelligence."

Replies from: army1987
comment by A1987dM (army1987) · 2013-05-06T20:24:38.574Z · LW(p) · GW(p)

The former suggests narrow intelligence to me, whereas the latter is more neutral ("artificial intelligence", by contrast, suggests general intelligence to me).

comment by Kawoomba · 2013-05-05T17:44:50.554Z · LW(p) · GW(p)

10-4

Replies from: DaFranker
comment by DaFranker · 2013-05-06T20:24:08.928Z · LW(p) · GW(p)

ISWYDT v1

Got Milk?

comment by RHollerith (rhollerith_dot_com) · 2013-05-08T01:56:07.866Z · LW(p) · GW(p)

I cringe a little every time I see someone here write, "Suppose Omega told you X," when, "Suppose X," works just as well.

Replies from: TimS
comment by TimS · 2013-05-08T02:05:25.453Z · LW(p) · GW(p)

I see your point, but the phrasing you object to tends to reduce how often responders fight the hypothetical. At least, in theory.

comment by buybuydandavis · 2013-05-07T03:52:39.007Z · LW(p) · GW(p)

Jargon and terms of art have their uses and abuses. Clearly, it's very handy to have short references to complex concepts to communicate more information faster.

Unfortunately, it's also a tool of control and status, used to exclude and pretend.

I tend to hate jargon. Only so many concepts fit in my head at once. That's one of the reasons for my preference for Jaynes. "What is the probability of X?" works much better for me than the endless sea of special-purpose names and concepts in conventional statistics.

comment by DaFranker · 2013-05-06T20:19:29.700Z · LW(p) · GW(p)

Slightly side-tracked:

I had several objections to this, and then did some standard debiasing and came up with an obvious-in-retrospect solution in less than five minutes, here to anchor your judgment for my dark-artsy pleasures! Unfortunately, the original idea I had relies on technologies (and their widespread use) that are barely being hinted at by obscure high-tech lab projects like Steve Mann's EyeTap and Google's Glass, so here's the toned-down version that at least fixes some of the issues for internet discussion boards. [1]

The obvious solution I mention is to write a (browser) script that maintains a database of jargon terms or keywords or unconventional definitions or abstract concepts, along with links to places that explain them, and an easy and convenient way to add new jargon to it. This script would unobtrusively (based on my scripting experience, this "unobtrusive" part and the "easy to add new jargon" are probably the two tallest orders and most difficult parts of such a project) suggest linking / referencing (or perhaps also allow for one-click substitution / inserting an explanation of the concept) whenever it detects the keywords.

The base concept would be that it work like the automatic spell-checking dictionaries (e.g. the one integrated in most versions of Firefox), but instead of suggesting corrections to common words, it would suggest links and references for specialized jargon and obscure terms.
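
(A minimal sketch of what that toned-down version could look like, assuming a hand-maintained glossary kept in the script itself; every name, link, and structure below is an illustrative placeholder rather than a real browser-extension API.)

```typescript
// Sketch of the jargon-suggestion idea: a hand-maintained glossary plus a
// scanner that flags known terms in a draft, spell-checker style.
// Links are placeholders, not real URLs.

const jargonGlossary: Record<string, string> = {
  "steel man": "https://example.com/wiki/steel-man",
  "utilons": "https://example.com/wiki/utilons",
  "dark arts": "https://example.com/wiki/dark-arts",
};

interface Suggestion {
  term: string;      // the jargon term found in the draft
  link: string;      // where to point unfamiliar readers
  position: number;  // character offset, so a UI could highlight it
}

// Scan a draft comment and return link suggestions for any known jargon.
function suggestLinks(draft: string): Suggestion[] {
  const suggestions: Suggestion[] = [];
  const lowered = draft.toLowerCase();
  for (const [term, link] of Object.entries(jargonGlossary)) {
    const position = lowered.indexOf(term.toLowerCase());
    if (position !== -1) {
      suggestions.push({ term, link, position });
    }
  }
  return suggestions;
}

// Example: a comment composer could call this on every edit and show the
// results unobtrusively, like a spell-checker's squiggly underlines.
console.log(suggestLinks("That charity is low on utilons per dollar."));
```

A real version would persist the glossary, match on word boundaries, and hook into the page's comment box, but the core suggestion loop is about this small.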

1: (The original idea I had involved automatic personal databases and inter-device communication that compared those databases and offered automatic substitutions or transmitted link references when there were mismatches between two users' data, so that you'd just keep on using jargon, and terms that you understand differently from your audience would be automatically (or by suggestion) adjusted for possible misunderstandings, or have explanatory notes / links to references attached to them, even during normal in-person conversation. I take my living-in-the-future ideals very seriously.)

comment by Viliam_Bur · 2013-05-11T13:30:43.452Z · LW(p) · GW(p)

How about discussing the jargon piece by piece? Some words could perhaps be replaced by already existing words with the same meaning but larger audience. Other words could remain if we feel they add enough value.

comment by Littlest · 2013-05-08T14:19:10.924Z · LW(p) · GW(p)

This seems like a great way to keep from accidentally alienating new members to a group.