No, Newspeak Won’t Make You Stupid

post by MikkW (mikkel-wilson) · 2020-12-18T00:56:02.654Z · LW · GW · 38 comments

Contents

  Cadence of Information is Constant
  Do We Need All These Words?
  If Someone Wants to Force You to Speak Newspeak, Run Away
    Footnotes

In George Orwell's book 1984, he describes a totalitarian society that, among other initiatives to suppress the population, implements "Newspeak", a heavily simplified version of the English language, designed with the stated intent of limiting the citizens' capacity to think for themselves; everybody knows that when you have a thinking people, keeping a peoplegroup still and not angry is unpossible.

In short, the ethos of Newspeak can be summarized as: "Minimize vocabulary to minimize range of thought and expression". There’s no way such a simple idea could mean different things to different people, right? Well… the book actually implies two different, closely related ideas, and they are worth separating here.

The first (which I think is to some extent reasonable) is that by removing certain words from the language, words which serve as effective handles for pro-democracy, pro-free-speech, pro-market concepts, the regime makes it harder to communicate and verbally think about such ideas… Although, if that were the only thing done by Orwell’s Oceania, it would work about as well as taking a sharp knife away from a toddler while leaving a fully-loaded AK-47 on the ground next to them; people are adept at making themselves understood, even in the face of constraints on communication.

The second idea, which I worry is an incorrect takeaway people may get from 1984, is that by shrinking the vocabulary people are encouraged to use (absent any particular bias towards removing handles for subversive ideas), one will reduce the intellectual capacity of the people using that variant of the language. However, since that idea is false, that definitely, 100% clearly makes it perfectly okay for a government to force Newspeak on its people, and that totally wouldn’t be a creepy overstepping of its power (I know, Poe’s Law says it is utterly impossible for me to be sarcastic on the internet without somebody thinking I actually believe it).

Cadence of Information is Constant

If you listen to a native Chinese speaker, then compare the sound of their speech to that of a native Hawaiian speaker, there are many apparent differences between the two languages. Mandarin has a rich phonological inventory containing 19 consonants, 5 vowels, and, quite famously, 4 different tones (pitch patterns) applied to each syllable, for a total of approximately 5,400 possible syllables, including diphthongs and complex vowels. The Hawaiians’ phonemes, on the other hand, all fell off the catamaran during a catastrophic event at some point on the long trip from Eastasia to Hawaii, so they have only 8 consonants, 5 vowels, and no tones. Including diphthongs, there are about 200 possible Hawaiian syllables.

One might naïvely expect that Mandarin speakers can communicate information more quickly than Hawaiian speakers, at a rate of 12.4 bits per syllable vs. 7.6 bits per syllable (the base-2 logarithms of 5,400 and 200, respectively). However, this neglects the speed at which syllables are spoken. Imagine two fountains: one emits a large stream of water slowly, the other spews a thin ribbon of water flying at the speed of a Plaid Model S outracing a Ferrari. Even though the first fountain (Mandarin) has a much larger stream (bits per syllable), the two fountains output the same amount of water, because the second fountain (Hawaiian) is so much faster (many syllables per second). For this reason, Hawaiian and Mandarin are much closer to each other in speed of communication than their phonologies would suggest. [1]
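These figures can be reproduced with a short back-of-the-envelope calculation. The sketch below treats every syllable in a language as one of N equally likely options (a simplification; real syllables are far from uniformly distributed), takes N from the inventory sizes above, and multiplies by the speaking rates discussed in footnote [1], where Japanese serves as a stand-in for Hawaiian.

```python
import math

# Approximate syllable inventories from the text above, and syllables-per-second
# figures from footnote [1] (the Japanese rate stands in for Hawaiian).
languages = {
    "Mandarin": {"inventory": 5400, "syllables_per_second": 5.18},
    "Hawaiian": {"inventory": 200, "syllables_per_second": 7.84},
}

for name, data in languages.items():
    # Treating each syllable as one of N equally likely symbols gives log2(N) bits.
    bits_per_syllable = math.log2(data["inventory"])
    bits_per_second = bits_per_syllable * data["syllables_per_second"]
    print(f"{name}: {bits_per_syllable:.1f} bits/syllable, {bits_per_second:.1f} bits/second")

# Mandarin: 12.4 bits/syllable, 64.2 bits/second
# Hawaiian: 7.6 bits/syllable, 59.9 bits/second
```

The per-syllable figures differ by a factor of about 1.6, but the per-second figures land within roughly 10% of each other, which is the point of the fountain analogy.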

This is because, in general, the cadence of information is constant. Within a given language and dialect, sure, a speaker may speak at a higher or lower tempo in different contexts. But within any given context, within the same species and across different languages, we should expect that people can only process and comprehend so many bits per second, and that this rate will be more or less the same no matter where you go. You can therefore increase how much you communicate with each word, or with each syllable, but only by reducing the speed at which you pronounce those words.

Bits per second will stay the same, so the number of words per minute will be inversely proportional to the number of bits communicated per word. You can't communicate faster, or communicate more information, by making your words more nuanced, since your audience only has so much processing power; the cadence of the information you can communicate is constant.

Back to 1984. If we took out our scissors and cut giant holes in the dictionary, so that it became only 1/20th its current size (while steering clear of the thoughtpolice and of any bias in which words are removed), what should we expect to happen? One may naïvely think that, just as banning the words "democracy", "freedom", and "justice" would inhibit people's ability to think about Enlightenment Values, banning most of the words should inhibit our ability to think about most things.

But that is not what I would expect to happen. One should expect compound words to take the place of deprecated words, speaking speeds to increase, and, to accommodate the increased cadence of speech, tricky sequences of sounds to be elided (blurred / simplified), allowing complex ideas ultimately to be communicated at a pace that rivals that of standard English.

Do We Need All These Words?

I recorded a version of the first section of Scott Alexander’s “Eight Short Studies On Excuses [LW · GW]” which uses only the 1,000 most common English words, as counted by the Up-Goer Five Text Editor [2], and I read it at a cadence that conveys the same information as the original in the same amount of time it took me to read the original. I was underwhelmed by how it came out: limiting myself to 1,000 words doesn’t actually change how it sounds in most parts, with just a few turns of phrase sticking out as not being normal English. If I had tried a similar exercise with maybe 200 or 500 well-chosen words, I think that would have illustrated my point better.

But even so, this version of Yvain’s work illustrates an important point: it’s much easier for a non-anglophone to learn 1,000 words than to learn the entirety of the English vocabulary, and after having done so, they would be able to understand or produce an equivalent text. In practice, I don’t think we need more than 1,000 words to communicate clearly (we can probably go much lower - Sona is an artificial language with no more than 375 root words; while it never saw serious use, my experience studying it suggests that 375 radicals is sufficient for a full language, and I suspect even that isn’t the limit).
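As an aside, the mechanical part of this exercise (checking a draft against a fixed word list) is easy to automate. The sketch below assumes a hypothetical plain-text file with one allowed word per line; the actual Up-Goer Five list would need to be supplied separately, and the check makes no attempt to handle inflected forms or the derived compound words mentioned in footnote [2].

```python
import re

# Hypothetical filename; a real list of the 1,000 most common words
# (one word per line) would have to be supplied.
with open("top_1000_words.txt") as f:
    allowed = {line.strip().lower() for line in f if line.strip()}

def words_outside_list(text: str) -> list[str]:
    """Return the words in `text` that are not on the allowed list."""
    words = re.findall(r"[a-z']+", text.lower())
    return sorted({w for w in words if w not in allowed})

draft = "The regime makes it harder to communicate and verbally think about such ideas."
print(words_outside_list(draft))
# Might print something like ['communicate', 'regime', 'verbally'],
# depending on what the word list actually contains.
```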

So why do we have so many more words than we need? I think the answer comes down to signaling: being able to use a wide variety of words demonstrates that one has learned all the words one uses (and implies knowing many words of a similar difficulty level that one hasn't had occasion to use), which signals having enough mental capacity to do so. Conversely, relying on a limited vocabulary, either in speaking or in the material one chooses to read or listen to, signals low intelligence (and, I suspect, from a historical perspective it also tags non-native or non-fluent speakers as not belonging to your tribe), so if someone wants to signal intelligence (something all humans are designed to do subconsciously), they will gravitate towards using a rich cornucopia of words and shy away from styles of speech that synergize with a limited lexicon. But while this explains why languages tend to have large vocabularies, it doesn’t mean that, from a language-design standpoint, you actually want or need a large vocabulary to effectively communicate or think.

If Someone Wants to Force You to Speak Newspeak, Run Away

It should go without saying that we shouldn’t lobby for our government to shorten the vocabulary we’re allowed to use. While I maintain that nothing bad would happen as a direct result of restricting our vocabulary (setting aside the thoughtpolice…), let’s just say that if the government thinks implementing Newspeak is in the Overton window, then we’ve got much bigger problems on our hands.

Villiam responded to a previous version of this post [LW(p) · GW(p)] saying: “How would you get stuff done if people won't join you because you suck at signaling?”, a point I wholeheartedly concede. Oftentimes, it’s important to signal desirable qualities, and intentionally using a version of your language that is hard to learn can be an effective way to signal that you are a useful person to ally with or listen to. So when you need to signal that you are smart, I won’t implore you to use simplified English. But perhaps there’s room for a forum where a language with a small lexicon is the norm, especially if the users have some other way of knowing that their fellow users are intelligent.

There are a few big problems facing the world today - the biggest of them, of course, is that AI will have almost certainly killed every single human by the end of this decade, a problem that unfortunately I have little idea how to effectively address right now. But below that, somewhere in the top 5 problems the world faces, is a very big one: even in this extraordinarily connected era, there are large cultural barriers between the major world powers’ populations, especially between the US, China, and Russia. Most Americans consume media originally written in English, most Chinese consume media originally written in Chinese, and most Russians consume media originally produced in Russian. If history is any guide (which it is), and if AI doesn’t kill us by then (which it probably will), then it’s reasonable to presume that this will indirectly lead to the death of a large portion of the world’s population. There’s a good chance that includes you and me, too.

It would be valuable to try to break down the cultural barriers that exist between the great powers of the world, especially among those citizens who are most predisposed to nationalism, and no forum that operates in a high-prestige variant of one of the great languages will ever be well-suited to that purpose. For this reason, I would like to see a forum with similar features to our beloved LessWrong, but which operates in a dialect or language with a small lexicon, something closer to the 375 radicals of Sona than the 1,000 words used in Up-Goer Five. If the goal is to attract people from a wide variety of cultures, and especially those who are most at risk of nationalistic thought, I feel it would be appropriate that this shouldn’t be a simplified version of a major language, but rather a neutral language, which would make Sona (or an improved version of Sona) an ideal lingua franca for such a site [3]. I think one way to counteract the lack of signalling afforded by vocabulary is to require something resembling an IQ test to be passed before being able to comment or post on the site (though anyone could view it without needing to take any test), and people who do especially well on it might get a special badge that makes their name stand out.

Setting all that aside, even more than any practical proposals, I think the most valuable thing you will get from having read this post is simply having the principles I laid out in its first sections in the back of your mind, ready to match with some unanticipated stimulus sometime in the future and produce a correct answer because you remembered what I wrote here.

Footnotes

[1] I found a figure of 5.18 syllables per second for Mandarin, and while I did not find a figure for Hawaiian, I found that Japanese, which has a phonology similar to (though slightly more complex than) Hawaiian’s, is spoken at 7.84 syllables per second. This is from a secondary source, which unfortunately did not provide a reference other than “a study by researchers at the Université de Lyon”, without naming the authors or the paper. Multiplying these numbers together, and assuming a similar cadence for Hawaiian as for Japanese, gives an estimate of 64.2 bits per second for Mandarin and 59.6 bits per second for Hawaiian. This is in line with the expectation from the previous paragraph, especially when you consider that Japanese is spoken more slowly than Hawaiian.

[2] There are some compound words I used that the editor doesn’t like, but they are easily derivable from words the editor accepts, so I considered them fair game.

[3] Esperanto is not well-suited to this. While Esperanto's lexicon is smaller than those of most natural languages, it is still quite large and requires a sustained, deliberate effort to learn. Esperanto is also too similar to certain languages it is based on, which makes it less culturally neutral than Sona.

38 comments

Comments sorted by top scores.

comment by gjm · 2020-12-18T01:22:30.317Z · LW(p) · GW(p)

Although your post uses plenty of sophisticated words, the only part of it that I thought was mostly there to signal sophistication was the part where you said that the reason why people use sophisticated words is mostly to signal sophistication.

A perfectly sufficient explanation for a lot of use of sophisticated words, it seems to me, is the obvious naive one. Sometimes a fancier word expresses a useful idea clearly and concisely, and the alternative would be circumlocution; perhaps, as you suggest, just talking faster could make up for that, but (1) I am not convinced and (2) that might make a smaller language function OK, but no one is choosing between a smaller language spoken faster and a larger one spoken slower; they are choosing what to say on a given occasion, and they will speak at about the same rate whether they're using fancy words or not. And sometimes a fancier word has a sound that's better for your purposes for some reason; for an extreme case, consider poetry, where poets will sometimes use quite obscure words because they need a particular metre or rhyme or other sonic effect. Of course some poets sometimes may be signalling sophistication too.

Replies from: Ericf, mikkel-wilson, mikkel-wilson
comment by Ericf · 2020-12-18T05:02:24.577Z · LW(p) · GW(p)

I do not agree. MIKKw used 3rd and 4th Ten Hundred words all over their writing. In many cases a 2nd Ten Hundred pick would do.

Note that I'm suggesting using 2nd Ten Hundred as the cut off, because "Thousand" is not one of the top ten hundred words. And many of the other words I use all the time aren't either.

I also do not agree with MIKKw. Sets of words, especially written ones, need big words in order to send big ideas in a short enough set of words to be understood. Human brains can only hold about 5 things in them at a time, so if one idea with fixed meaning needs 5 words just to say, it becomes not possible to hold a relationship between two ideas with fixed meaning in mind at the same time. One time this happened is writing this set of words. As hard as it was, I could not have done it without grouping sets of words into ideas with fixed meaning. If there wasn't already a big word for that idea with fixed meaning, I would want to make one. And, once I made it, I would share it, so others could be helped to think big thoughts too. And now we've found the start of why there are so many words.

Replies from: gjm
comment by gjm · 2020-12-18T09:46:29.333Z · LW(p) · GW(p)

I'm not sure what your first paragraph is disagreeing with me about. Specifically, when you say "In many cases a 2nd Ten Hundred pick would do", do you mean to imply that the reason for MikkW's choice was signalling? I don't see any particular reason to believe that. For any particular word choice, the actual explanation may simply be something like "well, there are five different words that would do and I picked one more or less at random", but of course that doesn't do much to explain how the language grows.

Your third paragraph appears to me to be making much the same point as I was: fancy words are useful for communication because clarity and conciseness matter.

One thing you make more explicit than I did, which may be worth making more explicit still: language isn't just for communication with other people, but also for thinking with, and thinking may be easier with a richer vocabulary. This is of course exactly the claim contradicted by MikkW's title, but I don't think the article does much to justify that contradiction: MikkW first claims that "cadence of information is constant", but supports it with one example comparison, where the difference is in phonology not vocabulary, and where the "poorer" language is in fact still slower -- which doesn't seem to me to offer much reassurance -- and then assumes that the same goes for thinking as for communication, which also seems entirely unjustified to me: I can speed up my speaking if my language is "naively" less dense, but I don't think at the same speed as I speak, and it's not at all obvious that the same speedup opportunities are available there.

[EDITED to fix miscapitalization of "MikkW", which I carelessly copied from the parent comment without checking.]

Replies from: Ericf
comment by Ericf · 2020-12-18T14:00:49.435Z · LW(p) · GW(p)

Yes, we're on the same page here in general. I was specifically objecting to your first paragraph, and noting that mikkw was using larger than necessary words throughout. Possibly unconsciously, though.

Replies from: gjm
comment by gjm · 2020-12-18T17:19:24.088Z · LW(p) · GW(p)

I still don't understand what in my first paragraph you were objecting to. (Sorry to belabour the point, but it seems like I'm missing something and I would prefer to avoid miscommunication if possible.) It seems like you think I was saying that MikkW was not using words fancier than strictly necessary throughout, but I wasn't: I was saying the opposite, which is the same thing you were saying.

The only other claims in that first paragraph were (1) that most of MikkW's usage of fancy words (and indeed most of MikkW's OP) was not primarily signalling and (2) that his explanation of vocabulary in terms of signalling was primarily signalling. Those are both disputable, but I don't think you said anything to dispute them.

So I'm still confused about what, specifically, you were objecting to in what I wrote. What am I missing?

comment by MikkW (mikkel-wilson) · 2020-12-18T05:57:16.859Z · LW(p) · GW(p)

A perfectly sufficient explanation for a lot of use of sophisticated words, it seems to me, is the obvious naive one. Sometimes a fancier word expresses a useful idea clearly and concisely, and the alternative would be circumlocution

The entire point I was trying to make with this post is that the "obvious naïve" explanation isn't perfectly sufficient. Yes, a more sophisticated word can communicate more bits, more information, than a more common word. But since the cadence of information is constant - a direct consequence of the fact that the computer that is the human brain is capable of processing up to X bits of speech per second [1], but no more - when you increase the number of bits per word, you must compensate by speaking more slowly. You are right that nobody consciously asks "do I want to say many low-information words quickly, or a few high-information words slowly", but our brains will instinctively make this tradeoff, as I illustrated with Hawaiian and Mandarin in the post. If you look only at conscious processes, you will miss many, many of the most interesting things we homo sapiens do.

Once we realize that you can't actually communicate more information by using more nuanced words, the "obvious naïve" explanation doesn't actually explain anything at all - it just posits that you will put in extra effort to achieve the exact same results, which never happens in biology.

If you haven't read Simler and Hanson's The Elephant in the Brain, I would recommend reading it; it makes you think in a completely new way about how human cognition works.

[1] Based on my rough calculation in the first footnote of this post, X is roughly 60 bits / second. More important than the exact value is that there does exist some X.

Replies from: gjm
comment by gjm · 2020-12-18T10:32:42.681Z · LW(p) · GW(p)

As I just mentioned in a reply to someone else, I don't find your argument about the "cadence of information" convincing.

First, you speak of "the fact that the computer that is the human brain is capable of processing up to X = 60 bits of speech per second, but no more" to which I have to say: [citation needed]. So, objection 1: you haven't justified the claim that there is a fixed number of bits per second of language processing in the brain, nor have you made it clear what bits per second you are counting (the number of bits needed to specify what's being said, given how much sophistication in prediction?).

Then, you say something with which I do agree: if your language is "denser" then you will tend to speak it slower.

But the argument in the OP goes way further with this than is justified by the evidence it cites. Objection 2: you compare exactly one pair of languages, Mandarin and Hawaiian; as it happens, my guess is that in broad terms the same pattern holds quite generally, but you really need more evidence. Objection 3: the difference you look at between these languages is in phonology, not in vocabulary; it's not obvious that the same goes for both. (Suppose some potential bottleneck in speech decoding "knows" only about sounds and not meanings; then it will impose a tradeoff between richness of phonology and communication speed, but not between richness of vocabulary and communication speed.) Objection 4: in your comparison, the "richer" language is still faster. Objection 5: you consider only spoken language; there might be similar effects for writing and typing, but it's not clear that they're the same. Objection 6: the comparison you offer as evidence is between average data rates, but the likely effect of losing bits of vocabulary is to make particular things harder to express; if communicating simple things becomes faster and communicating complex things becomes slower, this may not show up in such comparisons but could be a big deal. And, most critically, objection 7: all of this is about communication between people rather than thinking within one person's brain, and it's very far from clear that the tradeoffs are anything like the same. (When I am thinking in words, the speed at which I think is not at all the same as the speed at which I speak.)

When you learn physics, economics, art history, analytic philosophy, or whatever, a part of your learning consists of specialized words. If you don't have a word for "momentum", that doesn't stop you talking about momentum, but it makes it clumsier, and it makes higher-level thinking about related topics much clumsier. (Consider e.g. the statement "position and momentum are conjugate variables" in classical dynamics or quantum mechanics.) Imagine trying to do mathematics or physics or economics without a word for "derivative". If I am talking to someone about the behaviour of a system of particles, my use of words like "momentum" and "derivative" is not signalling sophistication, it's a more or less necessary part of expressing what needs to be said. I'm sure I could somehow get my reasoning across with a greatly impoverished vocabulary, but it would require a lot of mental effort that I don't normally need.

Most fancy words aren't technical terms of that sort, of course, and for the avoidance of doubt I am not claiming that if I had to say "much smaller" instead of "greatly impoverished" I'd be handicapped in the same sort of way as if I had to say "mass times velocity" and "change per unit time at infinitely small scales" instead of "momentum" and "derivative". (... Writing that has brought to my attention another way in which circumlocution may hamper communication and thought. "Momentum" is not always just mass times velocity; e.g., in Hamiltonian dynamics one has "generalized momenta", of which a familiar special case is "angular momentum", and using the term "momentum" for all these things is itself a valuable thing. But if you pick an "elementary" circumlocution like "mass times velocity" then it doesn't apply to the more advanced cases, and if you pick a "sophisticated" one like "variable conjugate to position in Hamiltonian dynamics" then, even leaving aside the fact that "conjugate" and "Hamiltonian" and "dynamics" are themselves fancy words, you get something that will make no sense to people who haven't studied the more advanced parts of the theory. Having a word we can use for the simpler concepts and generalize for the harder ones is super-valuable.)

Sorry, I interrupted myself. Again, I'm not saying that all fancy vocabulary has the same sort of value that technical terms do. Technical terms are just a particularly clear illustration of how enriching your vocabulary can make a genuine, and large, difference to how effectively you can think and communicate in a particular domain. (Also, I think some things that are not now technical terms entered the language as technical terms; if so, note that this is a mechanism of language growth that is clearly not driven by signalling.)

I do not agree that the "obvious naive explanation" explains nothing, and so far as I can see you've offered no argument to support that criticism; rather, you've said that it makes a prediction that you know will be false, namely that "you will put in extra effort to achieve the exact same results". I think that's all wrong, and I think at least some of it is wrong even if we stipulate that you're right about constant information cadence and about using circumlocutions being no loss. What extra effort do I put in when I say "momentum" instead of some circumlocution? What extra effort do I put in when I say "communication" instead of some circumlocution? I think that in both cases the "extra effort" is on the side of the circumlocutor.

I haven't yet read Hanson&Simler, but I have read a fair bit of other Hanson and I am aware that, crudely caricatured, he claims that everything is signalling. Again, I suspect that Hanson's talk about signalling is not all about signalling, and that he does it in part because it's an idea that sounds cynically sophisticated and hence high-status. It may be that in TEITB they present compelling evidence and arguments that would change my mind; for the most part, what I've read of Hanson (which is mostly blog posts, so may not be trying to be as rigorous as he could be) doesn't do that, but takes it for granted that if you can posit a kinda-plausible-sounding signalling-based explanation for something then it must be right.

Replies from: mikkel-wilson
comment by MikkW (mikkel-wilson) · 2020-12-18T20:44:23.781Z · LW(p) · GW(p)

Objection 2: you compare exactly one pair of languages, Mandarin and Hawaiian; as it happens, my guess is that in broad terms the same pattern holds quite generally, but you really need more evidence. Objection 3: the difference you look at between these languages is in phonology, not in vocabulary; it's not obvious that the same goes for both.

This example was an intuition pump, to help identify the basic principle at play, not meant as a knockdown proof. But it’s no accident that the first two languages I thought of (which I chose as extremes of the richness vs speed spectrum) illustrate the point I made. If you do the same calculation for Spanish, Japanese, German, or any other language, we should expect to find the same pattern, that the bits per second comes out to the same cadence.

Objection 4: in your comparison, the "richer" language is still faster.

No. If you look again at the calculation of my rough estimate, you will notice that I didn’t use the actual speaking speed of Hawaiian, since I couldn’t find a good number for it, and instead plugged in the cadence of Japanese, which is slightly slower than Hawaiian, so the number I provide is an underestimate of the actual information speed of Hawaiian. Furthermore, even if the number wasn’t an underestimate, it’s not clear to me that the difference between 64.2 bits / second and 59.6 bits / second is statistically significant. (I don’t know the error bars, since my source was a secondary source and didn’t identify the paper where they got their numbers from)

Objection 5: you consider only spoken language; there might be similar effects for writing and typing, but it's not clear that they're the same.

I don’t see any reason to assume they’d be different, unless you know of a reason to think they would be. My theoretical justification (which I’m aware you aren’t yet sold on, but which generated the test of Hawaiian vs Mandarin that matched the prediction it made) holds just as well for written language as for speech.

Objection 6: the comparison you offer as evidence is between average data rates, but the likely effect of losing bits of vocabulary is to make particular things harder to express; if communicating simple things becomes faster and communicating complex things becomes slower, this may not show up in such comparisons but could be a big deal. 

This doesn’t seem correct to me. For example, in Sona ‘momentum’ would be ‘ganru’: “matter” + “movement”, and ‘derivative’ would be ‘nuakiagu’: “change” + “speed” + “trend”, literally “rate of change”. In this case, ‘ganru’ is even shorter than the word it replaces, and ‘nuakiagu’ takes me about as long to say as ‘derivative’ when I pronounce them at paces I find natural for each language, maybe even a little less.

In natural languages, simple words tend to be said quite quickly, while more complex words take more time to say - in general because they’re longer or have more complex sequences of sounds, while simple, common words tend to be shorter and easier to say. Similar effects should happen in a polysynthetic language (such as Sona).

 

If you don't have a word for "momentum", that doesn't stop you talking about momentum, but it makes it clumsier, and it makes higher-level thinking about related topics much clumsier.

Dancers have their own word for momentum: body-flight. ‘body-flight’ serves just as well to enable higher-level thinking about related topics, since once you have gotten familiar with the phrase, the brain treats it as one word, not as “body” + “flight”, except in addition to being able to serve all the same purposes ‘momentum’ serves, its meaning can also be easily inferred by someone who has never heard the word before. If you want to talk about angular momentum, you could just as easily say “angular bodyflight” in a world where physicists used the word ‘bodyflight’.

If I had to say "mass times velocity" and "change per unit time at infinitely small scales" instead of "momentum" and "derivative"

Of course you don’t want to use a definition as the handle you use for a word! But the thing is, you don’t have to do that (as illustrated above by ‘bodyflight’ and ‘momentum’).

I do not agree that the "obvious naive explanation" explains nothing, and so far as I can see you've offered no argument to support that criticism;

What? My previous comment was exactly an argument to show that.

What extra effort do I put in when I say "momentum" instead of some circumlocution?

You first had to learn the word ‘momentum’, and second, you had to store this sound and its meaning in the brain. To our conscious selves, this doesn’t feel like work, but from a biological perspective, our brains have to do a lot of work to make that happen. In contrast, with ‘bodyflight’, you still have to be exposed to that fixed term before you can use it, but you can get a gist for what it means even if you have never heard it before; and your brain has to store less information to be able to go from the concept of bodyflight to the word ‘bodyflight’, because instead of storing an entire sequence of sounds, it can just point to words that it has already stored.

I haven't yet read Hanson&Simler, but I have read a fair bit of other Hanson and I am aware that, crudely caricatured, he claims that everything is signalling.

The first part of the book delves into why the brain does stuff (especially, but not exclusively, signalling) subconsciously. I would also like to note that, as far as the two authors go, I personally hold more esteem for Kevin Simler (who writes at Melting Asphalt) than for Robin Hanson; I feel that Kevin’s biological explanations of many human phenomena delve into very interesting dynamics that have changed how I think about the species homo sapiens. I particularly like his blogpost Music in Human Evolution, written 5 years before The Elephant In The Brain was published, as a showcase for the kind of thinking Kevin does.

Replies from: gjm
comment by gjm · 2020-12-19T02:59:59.793Z · LW(p) · GW(p)

On objection 2: Your expectation of finding the same pattern whatever two languages you compare is not evidence that in fact it holds. For the avoidance of doubt, I do in fact expect a weak form of the pattern to hold near-universally: languages with, say, fewer bits needed to specify each phoneme will tend to be spoken with those phonemes occurring more rapidly. But you're making a substantially stronger claim -- that this will occur to just the extent required to hold the rate of information transfer constant, and that this will apply when there are vocabulary differences as well as when there are phonological differences -- and it seems to me that when making so strong a claim you really ought to provide more evidence. (Or else present it as a conjecture rather than a factual claim.)

On objection 4: ah, so in fact I should have said not that you did the calculation only for one pair of languages, but that in fact you didn't do the full calculation for any pair of languages!

On objection 6: I'm not sure I understand your objection to my objection :-). If "ganru" and "nuakiagu" actually express the same notions as "momentum" and "derivative" then all that means is that Sona does in fact have those words (I don't see that the fact that they are constructed from simpler parts is significant), and then I don't see what a comparison between Sona and English tells us about differences in vocabulary. (Of course it may only "have those words" for Sona speakers who have learned some physics and mathematics, but that's true of English too: to people without the relevant technical knowledge, "momentum" is the name of a splinter group in a UK political party and "derivative" means "copied from other works" or "one of those weird finance things".)

If you were only ever claiming that a language that lacks certain terms can be extended by adding new words that mean the same as those terms do ... well, sure, I agree, but I thought you were saying something much stronger than that.

Much the same goes for "body-flight". If that means (approximately?) the same thing as "momentum" then what that means is that ballet dancers have discovered some of the same ideas as physicists, and like physicists have coined a word for it. Again, the fact that it's a word made out of smaller meaningful parts doesn't seem relevant to me. Unless you're suggesting, here or in the case of Sona, that just from those parts you can work out the full meaning, so that if you use a smaller language then you never need to learn about momentum because you can just slam together "body" + "flight" and get the right concept by magic. But I bet you aren't suggesting that, because it seems to me very obviously not true.

What? My previous comment was exactly an argument to show that. [that my "obvious naive explanation" explains nothing -- gjm]

Well, as I said, I don't see anything in that comment that looks to me like an argument showing anything of the sort. Of course it's entirely possible that you did in fact make an argument, perhaps a very strong one, with that conclusion, and I just failed to grasp it. (The way it looks to me is that you made some non sequiturs.) If you want to persuade me, then I think you will need to make your argument clearer and more explicit. (Of course you are in no way obliged to make that effort.)

You first had to learn the word ‘momentum’, and second, you had to store this sound and its meaning in the brain. [this is answering my question of what extra effort I need to expend on account of saying "momentum" rather than using a circumlocution -- gjm]

OK, sure. But those are both one-off extra efforts, and (so it seems to me) this effort is amply repaid by the reduced effort every time I need to use the concept thereafter. (Compared, again, to a hypothetical situation in which I don't have a word for "momentum". It seems a bit as if you're now shifting to what seems to me an entirely different claim, namely that we would do better with a different word for "momentum", one whose origins are more transparent. That might be true but if it has anything to do with what you were originally saying, I don't see what.)

So it looks to me as if when I learned the word "momentum" I was paying an immediate price for a larger future benefit. This absolutely is a thing human brains do all the time.

Not every instance of learning a new word will end up actually being a benefit on net. (To take another technical example, once upon a time I learned the meaning of "regular" when applied to a topological space. So far as I can recall, I have never once had a need to use that term or the concept it names, and I probably never will again.) But it seems to me that much vocabulary-learning has positive expected net benefit.

Again, it's possible that the case of technical terminology is misleading; learning the meaning of "luxuriant" isn't much like learning the meaning of "momentum" and its benefits are different. So, while I think it's very obvious that "momentum" pays its way, I wouldn't make nearly so strong a claim for "luxuriant". But I think that, to put it mildly, it is not obvious that having more words has insufficient non-signalling value to explain the fact that lexicons grow, and I am never impressed by "I am not convinced that X is practically useful, therefore X must really be all about signalling", which is what it seems to me your argument comes down to: there are just too many guesses and gaps in the chain from "languages with richer phonology tend to be spoken slower", with which I do agree, to "languages with richer phonology are spoken exactly slower enough to cancel out the difference in information rate" and then to "the same applies to languages with richer vocabulary" and then to "and the same goes for thinking as for communication" and then to "and this doesn't merely hold on average, it holds for every specific case" -- which is the point at which I think you'd have a reasonable basis for claiming that there must be some "non-functional" explanation for large vocabularies, though not necessarily for identifying signalling as the specific best explanation.

Replies from: mikkel-wilson, mikkel-wilson
comment by MikkW (mikkel-wilson) · 2020-12-19T03:34:19.189Z · LW(p) · GW(p)

I am never impressed by "I am not convinced that X is practically useful, therefore X must really be all about signalling"

I don't mean to claim it is 100% necessarily about signaling, however I do mean to claim that A) there's a solid argument to believe that signaling plays a role, and B) that the "naïve obvious" answer has very little to do with it. (Regardless of whether you are convinced, this is the main claim of the post, and I stand by this claim) There could very well be other reasons that I haven't considered which make a large vocabulary useful that don't have to do with signaling, I don't know.

I am aware that I haven't proved beyond a shadow of a doubt that my claims are true, but I have given both a theoretical justification and actual examples that illustrate something that is interesting (according to me) and counterintuitive, which is more than enough to justify making a post here.

comment by MikkW (mikkel-wilson) · 2020-12-19T03:26:28.469Z · LW(p) · GW(p)

Storing things in memory isn't a one-off cost, since you need to keep it there, which takes up space, and I believe a non-zero amount of maintenance in the context of the human brain

comment by MikkW (mikkel-wilson) · 2020-12-18T02:30:58.849Z · LW(p) · GW(p)

And sometimes a fancier word has a sound that's better for your purposes for some reason

In non-poetry uses, I think you'd be hard-pressed to identify a case of a word sounding good over alternatives that doesn't have to do with subconscious signaling. Sure, I sometimes smile when I listen to lyrical alliteration, but don't you smile because it signals smarts and sophistication? And OJ's lawyers put a rhyme to a use most sublime when they said "If the glove doesn't fit, you must acquit", but surely it's a good sign when a potential ally can make their sounds align.

Of course some poets sometimes may be signalling sophistication too.

Since when has poetry (or most art for that matter) been anything but signaling?

Replies from: gjm
comment by gjm · 2020-12-18T17:14:55.764Z · LW(p) · GW(p)

I don't at all deny that felicitous word choices may serve a signalling purpose. But e.g. if OJ's lawyers correctly guessed that the jury would find their rhyming nonsense persuasive because of the rhyme, then it seems to me that they used those word choices for a non-signalling purpose. Maybe in some sense they used brain mechanisms whose underlying function is signalling-related, but I don't think that's relevant here; unless I misunderstood, the point of your argument about signalling was something like "we have all these words only for the sake of signalling; this is a sufficient explanation for their use in our language; so it will do us no harm for non-signalling purposes to throw them out", and if some use of those words co-opts our signalling machinery for other purposes then the conclusion no longer holds.

I don't really know how to address a bare unevidenced unargued-for claim that something is "nothing but signalling", but it seems to me that the arts in general are not pure signalling. The fact that many people listen to music even when they're doing it through earphones and no one else can tell what they're listening to is some evidence of that, although of course it's not conclusive (and I don't see how anything could be).

"From the inside" it certainly seems that plenty of poetry is moving, plenty of paintings are pleasant to look at in ways that resemble (e.g.) the ways some actual landscapes, people, etc., are pleasant to look at, and so forth. I would say that that's already enough to show that these arts aren't only signalling. Depending on exactly what the proposition "X is nothing but signaling" means to you, you may well disagree; if so, and if you can give me an idea of what kind of evidence could possibly convince you otherwise, then I'm willing to argue the case :-).

comment by Massimog · 2020-12-18T04:35:23.340Z · LW(p) · GW(p)

Have you heard of the conlang Toki Pona? I'm not super familiar with it and its community since I just learned about it recently, but it's only got 123 root words, I've heard it claimed that you can learn it in a weekend, and (from my limited perspective) it seems quite popular in the wider conlang community.

Replies from: mikkel-wilson
comment by MikkW (mikkel-wilson) · 2020-12-18T05:15:45.798Z · LW(p) · GW(p)

I've heard of it before, but that was many years ago, so I'm happy to be reminded of it.

Update: After having looked at it for a bit, I'm struck by how ambiguous Toki Pona is. So far, it feels very different from Sona, which uses its slightly larger set of radicals to form compound words that can cover basically any word we have in English. Running with Simplified English would probably end up with something closer to Sona than to Toki Pona (or so my experience with it so far suggests).

comment by ChristianKl · 2020-12-18T14:49:18.694Z · LW(p) · GW(p)

I think one way to counteract the lack of signalling afforded by vocabulary is to require something resembling an IQ test to be passed before being able to comment or post on the site 

Simply learning a new language will be enough of a barrier to entry to a site. Likely, it would be a high enough barrier that nobody would use the website.

comment by gilch · 2021-07-01T21:46:35.721Z · LW(p) · GW(p)

Sure it will. There are higher-order concepts that you can only learn with language. I remember an interesting Radiolab episode about this.

One example from the episode was that rats get the individual concepts of "left" and "blue" and "wall", but the higher-order relation "left of the blue wall" is totally beyond their comprehension. This is the same for humans up until about the age of six, and even for adults if you deny them the use of language.

Another example was the case of deaf people in a poor African country. They couldn't learn spoken language, but when put in groups, they spontaneously invented sign language, but for the first generation it was lacking certain higher-order concepts, notably theory of mind. Adults in this group were failing tests for this that older children get right every time, but that younger children always get wrong, at least until they learned the concepts from the next generation of deaf children who had improved the sign language.

Replies from: mikkel-wilson, mikkel-wilson
comment by MikkW (mikkel-wilson) · 2021-07-02T01:02:25.044Z · LW(p) · GW(p)

I'm somewhat confused, it seems you are suggesting that a person who speaks a simplified version of English would not have language, but that is not the case. ~1,000 words is plenty to be able to express any idea (including high-order concepts - example), even if in a way that seems roundabout to us (because our conventions are built around an axiom of having a rich vocabulary). Basic English uses only 850 words, and the Simple English Wikipedia encourages using that list or similar there.

Replies from: gilch
comment by gilch · 2021-07-02T01:25:05.083Z · LW(p) · GW(p)

I am not disputing that some words can be explained with other words, or even with a relatively small subset of words. A dictionary of Simple English definitions of collegiate English vocabulary would be doable.

I'm disputing the title of the post. We have a real-world example of a language that lacked an important concept—theory of mind—and without that concept from language, the people were unable to even think it. This was explained better in the episode than my comment (there is a transcript too).

Newspeak might not have its intended effect on the first generation, since the people using it are still capable of thinking in their old language. (Though perhaps this ability could atrophy over time.) But the second generation who only ever learned Newspeak might lack access to certain important higher-order concepts altogether.

Replies from: mikkel-wilson
comment by MikkW (mikkel-wilson) · 2021-07-02T02:39:11.039Z · LW(p) · GW(p)

Would you mind clarifying here whether you mean by »Newspeak«: A) Newspeak exactly as presented in 1984 or B) Simplified English (which Newspeak is a metaphor for, following both Orwell and me)?

If A), the title of this post was not meant to literally assert that 1984.Newspeak does not make someone stupid, and I clearly state in the post that Newspeak as implemented by Oceania is problematic; the title of the post should be understood as "Simplified English will not make you stupid" (a claim that Orwell would incorrectly disagree with me about). If B), I would then dispute the final paragraph of this comment.

comment by MikkW (mikkel-wilson) · 2021-07-02T00:51:20.946Z · LW(p) · GW(p)

Part of my point is that just as "Wall" and "blue" are words that can be assigned to meanings, "Left of the blue wall", if used enough times, will eventually become a word in its own right, even if one doesn't have the processing capabilities to combine the individual parts on the fly. If the researchers keep asking "go to the left of the blue wall", and seem happy when you go to this particular place (that happens to be next to a wall that is blue), you will eventually get the point, even if you don't realize why "left of the blue wall" is called what it is.

(I realize this doesn't address why it would be preferable to call it "left of the blue wall" instead of "shmaznag", which would be just as meaningful. I would expect "left of the blue wall" to be an easier fixed phrase to learn than "shmaznag", though)

I actually had a paragraph about this exact point in an earlier version of this post, but it was poorly written and somewhat confusing, so I removed it recently.

comment by chasmani · 2020-12-18T17:16:47.198Z · LW(p) · GW(p)

You might be interested in this paper; it supports the idea of a constant information processing rate in text: Coupé, Oh, Dediu & Pellegrino (2019), "Different languages, similar encoding efficiency: Comparable information rates across the human communicative niche", Science Advances.

I would agree that language would likely adapt to newspeak by simply using other compound words to describe the same thing. Within a generation or two these would then just become the new word. Presumably the Orwellian government would have to continually ban these new words. Perhaps with enough pressure over enough years the ideas themselves would be forgotten, which is perhaps Orwell's point.

I think the claim that sophisticated word use is caused by intelligence signalling requires more evidence. It is, I'm sure, one aspect of the behaviour. But a wider vocabulary is also beneficial in terms of being able to more clearly and efficiently disambiguate and communicate ideas. This could be especially true, I think, when communicating across contexts - having context-specific language may help prevent misunderstandings that would arise with a more limited vocabulary. It would be interesting to try to model that with ideas from information theory.

Replies from: ChristianKl
comment by ChristianKl · 2020-12-18T18:10:51.286Z · LW(p) · GW(p)

I would agree that language would likely adapt to newspeak by simply using other compound words to describe the same thing.

If a language eliminated a term like "fiancé", do you really think that people would just use a compound to replace the term? I would expect that they would default to a more general term like "boyfriend"/"girlfriend".

A Hawaiian might be able to say makuakane as fast as an English speaker says father, but makuakane doesn't distinguish between father and uncle, and thus the communicated information is going to be less.

Replies from: chasmani
comment by chasmani · 2020-12-19T14:34:54.326Z · LW(p) · GW(p)

Good point. I think it would depend on how useful the word is in describing the world. If your culture has very different norms between “boyfriend/girlfriend” and fiancé then a replacement for fiancé would likely appear.

I suppose that on one extreme you would have words that are fundamental to human life or psychology e.g. water, body, food, cold. These I’m sure would reappear if banned. Then on the other extreme you have words associated with somewhat arbitrary cultural behaviour e.g. thanksgiving, tinsel, Twitter, hatchback. These words may not come back if the thing they are describing is also banned.

Uncle/father is an interesting one. Those different meanings could be described with compound words. Father could be “direct makuakane” and uncle “brother makuakane”, or something like that. We already use compound words for family relations in English, like “grandfather”, whereas in Spanish it is “abuelo”.

Replies from: ChristianKl
comment by ChristianKl · 2020-12-20T18:45:34.337Z · LW(p) · GW(p)

While you could make up compounds, I think there's a reason why those anthropologists list Hawaiian as having the same kinship term for both.

Political terms like vote, representation, legitimization, and election might also not easily come back when a 1984-style government bans them along with the activities that they are about.

comment by ChristianKl · 2020-12-18T14:47:46.580Z · LW(p) · GW(p)

In practice, whenever you have fewer words for a subject, you are more likely to run into the motte-and-bailey fallacy. To get around the motte-and-bailey, you need terms that allow you to distinguish between related concepts.

Having separate words for norm, regulation, and law is quite useful if you want to have a discussion that makes distinctions between the concepts. Any specialized field of expertise develops its own vocabulary, and people outside that field of expertise won't immediately understand the terms. If the field gives them well-defined meanings, then people outside the field won't immediately understand all the meaning that the terms carry.

A language like Sona or Esperanto allows the speaker to make up new words based on the roots. While a listener who has never heard a word before can make a guess about what it might mean based on the roots, the roots don't contain all the information that a scientific field gives a term.

If I want to talk about the "Os subcalcaneum", you won't be able to package a bunch of your 375 stems together to express the concept in a way that a listener who hasn't learned what "Os subcalcaneum" means will understand.

A large reason why Esperanto has more words than Sona is that Esperanto speakers used the language and made up terms for a large variety of concepts that they wanted to speak about.

Frequently, language evolves by borrowing terms from other languages; "Os subcalcaneum", for example, would be borrowed from Latin. At the moment, a lot of new concepts first get English names because technology and science are primarily developed in English.

If your language can't borrow existing technological and scientific terms, because it allows only terms made up of your own roots, it will be hard to have a lot of meaningful discussions.

comment by MikkW (mikkel-wilson) · 2020-12-18T00:58:18.652Z · LW(p) · GW(p)

This post is based upon a post I wrote 4 months ago [LW(p) · GW(p)] on my shortform [LW · GW]. I've been planning on polishing it up a little and posting it as a top-level post for some time now, so I'm happy to finally have it ready!

comment by Chris_Leong · 2020-12-18T23:31:11.185Z · LW(p) · GW(p)

Sure, people can coin their own terms, but without a standard term there is a greater chance of these being misunderstood.

Plus, it's often not as easy as just saying it with other words - take, for example, the word freedom: what exactly does it mean? There are endless different ways of interpreting what kind of freedom is important, and if we give a single, simple definition then we've effectively shortcut the debate - and, moreover, given a definition that will most likely have obvious counter-examples.

Replies from: mikkel-wilson
comment by MikkW (mikkel-wilson) · 2020-12-19T00:05:14.578Z · LW(p) · GW(p)

Sure, people can coin their own terms, but without a standard term there is a greater chance of these being misunderstood.

As the language is used, standard terms will arise naturally

Replies from: ChristianKl
comment by ChristianKl · 2020-12-19T00:25:01.369Z · LW(p) · GW(p)

As the language is used, standard terms will arise naturally

And those will have to be learned, and then learning the language isn't just about learning 350 things anymore.

comment by mukashi (adrian-arellano-davin) · 2021-07-02T01:08:43.113Z · LW(p) · GW(p)

Interesting post. 

I just finished listening to Lex Fridman's interview of Yeonmi Park (here), a North Korean survivor living in the US, which I fully recommend; it touches on some of the things discussed here.

First, I agree with you that limiting our vocabulary does not necessarily mean that we won't be able to talk about a specific idea. I also think that signalling is part of the reason, though I don't think it is the main one.

Words are forged to refer to clusters of reality that we consider worth referring to in an abridged way: remove the word, and you can still refer to that thing, but now it becomes harder and you need more effort. So in this regard, I do think that removing words from our vocabulary really conditions the sort of things we can talk about. You don't need to make something extremely difficult to get people to stop using it; just a bit more difficult is enough. Even if you develop your own words to refer to certain unnamed clusters, you still need to transmit what those clusters are to other people in order to keep a conversation going, and convince them to use the words, which also takes effort.

comment by iamhefesto · 2020-12-19T13:22:20.132Z · LW(p) · GW(p)

Assuming "Cadence of Information is Constant" and we want to go into opposite direction (become "smarter") and increase the general effectiveness of, say, English then what are the other dimensions of the problem that could be helpful? In other words, in which directions existing languages can be improved?

Replies from: mikkel-wilson
comment by MikkW (mikkel-wilson) · 2020-12-19T17:58:16.361Z · LW(p) · GW(p)

Spaced Repetition, where a piece of information is presented multiple times, with exponentially increasing gaps between each repetition, works well to increase how much of the information we engage with actually sticks. Active Recall, where you are asked a question and have to remember the answer, also makes a big difference in how effectively you learn.
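(As a toy illustration of what "exponentially increasing gaps" means in practice, here is a minimal sketch. The growth factor of 2.5 is an arbitrary assumption for illustration; it is not Anki's actual scheduling algorithm, which adjusts each card's interval based on how well you answer.)

```python
def review_days(first_gap: float = 1.0, factor: float = 2.5, reviews: int = 6) -> list[float]:
    """Days after first learning an item at which each review falls,
    with the gap between consecutive reviews growing exponentially."""
    schedule, gap, day = [], first_gap, 0.0
    for _ in range(reviews):
        day += gap
        schedule.append(day)
        gap *= factor  # each successful review pushes the next one further out
    return schedule

print(review_days())
# [1.0, 3.5, 9.75, 25.375, 64.4375, 162.09375]
```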

I don't know how these principles can be applied to languages specifically, but building a habit of using these (via Anki) is probably the most effective thing we can do to increase the amount of information our brains engage with. I make a habit of studying Anki for 1/2 an hour every day, covering things like mathematics, language, physics, poetry, astronomy, even Chess and Go strategy. Since your question is about how we can increase interpersonal communication, I suspect building a culture of making and sharing high-quality Anki decks can streamline the transfer of information.

While I do posit that our brains naturally feel comfortable engaging with a fixed cadence of information regardless of the richness of our vocabulary, I'm not convinced that our natural pace pushes our brains to their maximum processing capacity - if you listen to a podcast at a faster speed than it was recorded, it does seem that you can take in more information. However, my personal experience is that if I talk faster than a natural pace, people tend to get confused and frustrated with me, and tell me to slow down. Perhaps the situation is simply that normal people process information optimally at our regular speaking pace, while smart people can process information at a higher speed, and so can benefit from a faster speed of communication. Of course, we can reap the benefits of this simply by increasing the speed at which we listen to things; we don't need richer vocabularies to invoke this effect.

comment by seed · 2020-12-19T01:17:46.590Z · LW(p) · GW(p)

Do you know anyone who wants such a forum?

Replies from: mikkel-wilson
comment by MikkW (mikkel-wilson) · 2020-12-19T03:19:51.778Z · LW(p) · GW(p)

Not really

comment by jensmikalo · 2020-12-18T06:09:15.616Z · LW(p) · GW(p)

One thing that stuck out to me was the suggestion of a separate badge, determined by a standardized test of some sort, as a proxy for the intelligence signaling that would be unavailable to speakers of a language devoid of the synonyms we have (which as you pointed out, exist on a gradient of intellectual prestige).

I'm more interested in the implications of the opposite - a forum where pure concepts are in dialogue without the trappings or filters afforded by the diverse ways to express them. It could establish sort of an anonymous meritocracy, forcing people to navigate the logic of others' perspectives objectively. It would likely ameliorate some of the prejudice associated with various dialects, or the lack of them. 

It would depend on, and I am myself unsure of this, whether one thinks those trappings are misleading or convey important information. When we leave out the subtext provided by phrasing, diction, or even (as has started many ad hominem/ad spelling-em arguments on the internet) grammar and spelling — are we purifying an idea to engage in more objective conversation, or missing out on valid portions of meaning?

I agree with the overarching premise that ideas exist independently of the words we use to describe them. But whether different semantics elucidate, inform, or obscure the ideas themselves is a rabbit hole that's way above my paygrade. 

I could phrase my second paragraph "If people could judge each other based on their ideas, and not the way they talk, we could avoid certain ideas being ignored for not important reasons. We could also avoid ideas being respected for not important reasons". 

It's pretty much the same. It's hard for me to pinpoint what's gained or lost, or more importantly, whether what's gained or lost is valuable. 

As an aside, would it be possible that a reduced lexicon, out of which complex concepts had to be assembled like legos, would diversify our range of thought through invention, or make room for more creative associations? I could see such a language being a fertile place, and not a graveyard, for 'poetry', or more intuitive meaning-making. I'm sure everyone knew what I meant by ad spelling-em arguments on the internet. And that's not a 'real word'. 

But to bring even that back to the concept-over-dialect idea - don't we all have an instinctive smirk when someone who disagrees with us uses the incorrect version of 'your'? I know, in my mind, I experience the easiest emotion - contempt - first, and only analyze the opposite opinion if I have the energy and inclination. Some of you might be better at this than I am, but the internet is evidence that many are not. 

In the end, maybe it wouldn't matter at all. We could extrapolate, and become opinionated on, others' supposed worldviews (level of consciousness) even if we were all communicating with the same lexicon. People who don't think critically now likely won't start in a new semantic setting. 

A good chunk of the material on this site makes me realize why Robert Pirsig's Phaedrus went insane. Every good answer only gives rise to more questions. It's difficult (for me) to find a convergent conclusion to divergent thinking. 

comment by Stuart Anderson (stuart-anderson) · 2020-12-18T04:37:55.691Z · LW(p) · GW(p)

-

Replies from: mikkel-wilson
comment by MikkW (mikkel-wilson) · 2020-12-18T05:23:13.788Z · LW(p) · GW(p)

If you want an exclusive technocratic bureaucracy

Uhh... what?

We have gotten by just fine not having a global monoculture before, so I see no reason to assume that we'll all go on a killing spree now.

Have you never studied history? You don't even have to go back an entire century for a good example of what homo sapiens is capable of, and if you look just at the past millennium, I see plenty of reason to worry about what you call "killing sprees"

comment by iamhefesto · 2020-12-19T13:27:14.680Z · LW(p) · GW(p)