Relevance Norms; Or, Gricean Implicature Queers the Decoupling/Contextualizing Binary
post by Zack_M_Davis · 2019-11-22T06:18:59.497Z · LW · GW · 30 comments
Reply to: Decoupling vs Contextualising Norms [LW · GW]
Chris Leong, following John Nerst, distinguishes between two alleged discursive norm-sets. Under "decoupling norms", it is understood that claims should be considered in isolation; under "contextualizing norms", it is understood that those making claims should also address potential implications of those claims in context.
I argue that, at best, this is a false dichotomy that fails to clarify the underlying issues—and at worst (through no fault of Leong or Nerst), the concept of "contextualizing norms" has the potential to legitimize derailing discussions for arbitrary political reasons by eliding the key question of which contextual concerns are genuinely relevant, thereby conflating legitimate and illegitimate bids for contextualization.
Real discussions adhere to what we might call "relevance norms": it is almost universally "eminently reasonable to expect certain contextual factors or implications to be addressed." Disputes arise over which certain contextual factors those are, not whether context matters at all.
The standard academic account of how what a speaker means differs from what the sentence the speaker said means is H. P. Grice's theory of conversational implicature. Participants in a conversation are expected to add neither more nor less information than is needed to make a relevant contribution to the discussion.
Examples abound. If I say, "I ate some of the cookies", I'm implicating that I didn't eat all of the cookies, because if I had, you would have expected me to say "all", not "some" (even though the decontextualized sentence "I ate some of the cookies" is, in fact, true).
Or suppose you're a guest at my house, and you ask where the washing machine is, and I say it's by the stairs. If the machine then turns out to be broken, and you ask, "Hey, did you know your washing machine is broken?" and I say, "Yes", you're probably going to be pretty baffled why I didn't say "It's by the stairs, but you can't use it because it's broken" earlier (even though the decontextualized answer "It's by the stairs" was, in fact, true).
Leong writes:
Let's suppose that blue-eyed people commit murders at twice the rate of the rest of the population. With decoupling norms, it would be considered churlish to object to such direct statements of facts. With contextualising norms, this is deserving of criticism as it risks creating a stigma around blue-eyed people.
With relevance norms, objecting might or might not make sense depending on the context in which the direct statement of fact is brought up.
Suppose Della says to her Aunt Judith, "I'm so excited for my third date with my new boyfriend. He has the most beautiful blue eyes!"
Judith says, "Are you sure you want to go out with this man? Blue-eyed people commit murders at twice the rate of the general population."
How should Della reply to this? Judith is just in the wrong here—but not as a matter of a subjective choice between "contextualizing" and "decoupling" norms, and not because blue-eyed people are a sympathetic group who we wish to be seen as allied with and don't want to stigmatize. Rather, the probability of getting murdered on a date is quite low, and Della already has a lot of individuating information about whether her boyfriend is likely to be a murderer from the previous two dates. Maybe (Fermi spitballing [LW · GW] here) the evidence of the boyfriend's eye color raises Della's probability of being murdered from one-in-a-million to one-in-500,000? Judith's bringing the possibility up at all is a waste of fear in the same sense that lotteries are said to be a waste of hope [LW · GW]. Fearmongering about things that are almost certainly not going to happen is uncooperative, in Grice's sense—just like it's uncooperative to tell people where to find a washing machine that doesn't work.
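(A minimal sketch of the Fermi spitballing above, with illustrative numbers only: assume a one-in-a-million base rate of a date ending in murder, and treat "commits murders at twice the rate of the general population" as a likelihood ratio of roughly 2. The figures and the helper function are hypothetical, just to show the arithmetic.)

```python
def update_odds(prior_prob, likelihood_ratio):
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    prior_odds = prior_prob / (1 - prior_prob)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

prior = 1e-6  # hypothetical base rate of "this date ends in murder"
lr = 2.0      # "twice the murder rate" treated as a rough likelihood ratio
posterior = update_odds(prior, lr)
print(f"{prior:.2e} -> {posterior:.2e}")  # ~1e-06 -> ~2e-06, i.e. about one in 500,000
```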
On the other hand, if I'm making a documentary film interviewing murderers in prison and someone asks me why so many of my interviewees have blue eyes, "Blue-eyed people commit murders at twice the rate of the rest of the population" is a completely relevant reply. It's not clear how else I could possibly answer the question without making reference to that fact!
So far, relevance has been a black box [LW · GW] in this exposition: unfortunately, I don't have an elegant reduction that explains what cognitive algorithm [LW · GW] makes some facts seem "relevant" to a given discussion. But hopefully, it should now be intuitive that the determination of what context is relevant is the consideration that is, um, relevant. Framing [LW · GW] the matter as "decouplers" (context doesn't matter!) vs. "contextualizers" (context matters!) is misleading because once "contextualizing norms" have been judged admissible, it becomes easy for people to motivatedly derail any discussions they don't like [LW · GW] with endless isolated demands for contextualizing disclaimers.
30 comments
Comments sorted by top scores.
comment by cousin_it · 2019-11-22T13:09:21.816Z · LW(p) · GW(p)
Many people have tried to draw such lines: literal vs contextual, mistake vs conflict theory, science vs religion. My line is mostly about reactions to disagreement. An engineer will say ok, I'll be in my garage working on my flying car. A politician will say ok, let's find points of contact. But a fanatic will call their friends and paint a target on me for disagreeing. I wouldn't do that to anyone for disagreeing with me, so it seems like a pretty sharp line.
comment by Matt Goldenberg (mr-hire) · 2021-01-11T16:15:36.947Z · LW(p) · GW(p)
I chose this particular post to review because I think it does a great job of highlighting some of the biases and implicit assumptions that Zack makes throughout the rest of the sequence. Therefore this review should be considered not just a review of this post, but also all subsequent posts in Zack's sequence.
Firstly, I think the argument Zack is making here is reasonable. He's saying that if a fact is relevant to an argument it should be welcome, and if it's not relevant to an argument it should not be.
Throughout the rest of the sequence, he continues to justify this basic position with underlying epistemology and math. He makes the case that language is there to help you make predictions, and shows how hard it is to predict people's views when there are things they aren't allowed to talk about. He makes the case that drawing boundaries on natural categories is important to make language useful.
However, I believe that throughout the sequence, the position he's implicitly arguing against is the way he defines contextualizing in a reply to me here:
Contextualizers" think that the statement "Green-eyed people commit twice as many murders" creates an implicature that "... therefore green-eyed people should be stereotyped as criminals" that needs to be explicitly canceled with a disclaimer, which is an instance of the more general cognitive process by which most people think that "The washing machine is by the stairs" creates an implicature of "... and the machine works" that, if it's not true, needs to be explicitly canceled with a disclaimer ("... but it's broken"). "Decouplers" don't think the statement about murder rates creates an implicature about stereotyping.
And, I believe this is a straw man. In particular, I believe Zack is being quite idealistic, building up a model of how language should be used, while ignoring the many ways language is actually used in practice.
If, indeed, language were only used for clearing up confusions, I think the argument would be quite one-sided, and language should in fact only be used in the ways Zack suggests. However, here's a list of ways that language is used that don't fit cleanly into that category:
- Convincing and motivating people to take action
- Creating positive or negative affect towards a particular idea or group.
- Quoting someone out of context to portray their positions a certain way.
- Describing a felt sense, or evoking it in someone else.
In general, the language I use isn't just changing the predictions that people make: it's affecting their emotions, it can be quoted or used to paint others in a certain light, and it's being cross-checked with other similar language and shaping people's emotional affect.
And the rest of the sequence largely ignores this. It paints the argument as being between a robust case that using language correctly helps you make more effective predictions, versus the choice to not hurt others' feelings.
But the truth is, there's a whole set of powerful consequentialist arguments about why you might want to consider the context of things like how it will affect the other person's affect towards what you're talking about, how it will be quoted, how it could be used to paint you or the groups you're affiliated with in a certain light, etc.
I don't believe a sequence that has such an implicit straw man should be included in the common knowledge of LessWrong, and I believe this largely because of many of the broader contextualizing arguments about the effect that this implicit straw man could have on the broader culture of LW.
↑ comment by orthonormal · 2021-01-12T15:09:42.387Z · LW(p) · GW(p)
Ironically enough for Zack's preferred modality, you're asserting that even though this post is reasonable when decoupled from the rest of the sequence, it's worrisome when contextualized.
↑ comment by Raemon · 2021-01-19T04:34:44.304Z · LW(p) · GW(p)
On one hand, I think I probably agree with the overall thrust of your criticism. But I don't think I endorse it in the context of the review.
Some of the posts in the review are sort of a stand-in for a whole sequence (Moral Mazes, Multiagent Models), but I don't think Zack's posts are (or at least I have not interpreted them as such). So I think it makes more sense to look at the given particular posts up for review and for each of them ask "okay, is the writing on this particular post embodying a wrong or confusing frame?"
I think the answer is yes in some cases, no in others.
In general: I probably still have some deep disagreements with Zack. (I'm not entirely sure, see below). But I also think I've learned a bunch from watching him follow this train of thought, and been impressed with how thoroughly he investigates it. And I don't think it makes sense to write a sequence off because the author has a frame you disagree with. I'm evaluating our intellectual progress at the group level [LW · GW], and I think it's a pretty key tool in the toolkit for individuals to take their assumptions and run with them and see how far they can get.
I think it's more useful to flag specific places where you think he's making a mistake on individual posts, than to make a vague metacriticism.
↑ comment by Matt Goldenberg (mr-hire) · 2021-01-19T13:14:23.518Z · LW(p) · GW(p)
I agree that it's intellectually fruitful to take assumptions and run with them, but I'm wary about them being enshrined in a book, a static place without comments, that can't be contextualized with critical comments or future work.
I think that if meta-criticisms of the implicit approaches and frames are not allowed, we can end up in similar issues that e.g. the integral community ran into where they had a lot of reasonable sounding and fruitful ideas that nevertheless ended up in quite problematic and unproductive places because no one was pointing out the subtle places where the whole methodology was incomplete or flawed.
Indeed, a lot of my worry around the particular intellectual direction of LW is informed by a look at what happened with Integral and Wilber.
↑ comment by Raemon · 2021-01-19T18:05:35.837Z · LW(p) · GW(p)
I'd still argue this at the level of individual posts rather than the sequence as a whole. (In this particular context. If there was something like an "Algorithms of Deception Sequence Book" getting considered as a whole I'd have a pretty different attitude.)
↑ comment by Matt Goldenberg (mr-hire) · 2021-01-19T20:39:57.563Z · LW(p) · GW(p)
Want to doublecrux on this?
↑ comment by Raemon · 2021-01-19T20:45:57.739Z · LW(p) · GW(p)
Hmm. I don't think I want to commit to a huge discussion of it. I'm happy to continue doing async LW comments about it. I'm busier than usual this month. There might turn out to be a day I have a spare hour or two to chat in more detail, but I don't think I want to spend cognition planning around that.
I think I've mostly said my main piece and am fairly happy with "LW members can read what Matt and Ray have said so far and vote accordingly." If you raise specific points on specific posts I (and others) might change their vote for those posts.
↑ comment by Matt Goldenberg (mr-hire) · 2021-01-19T20:56:10.757Z · LW(p) · GW(p)
Yeah so I think my thought on this is that it's often impossible to point at these sorts of missing frames or implicit assumptions in a single post. In my review of Liron's post I was able to pull out a bunch of quotes pointing to some specific frames, but that's because it was unusually dense with examples.
In the case of this post, if I were to do the same thing, I think I'd have to pull out quotes from at least 3-4 of the posts in the sequence to point to this underlying straw man (in this case I didn't actually do that and just sort of hoped others could do it on their own through reading my review).
↑ comment by Raemon · 2021-01-19T21:34:59.311Z · LW(p) · GW(p)
That seems true, but I think it still makes sense to concentrate the discussion on particular posts. (Zack specifically disavowed this post and the meta-honesty response, so I think it makes most sense to concentrate on Where To Draw The Boundaries and Heads I Win, Tails Never Heard Of Her)
I think it's reasonable to bring up "this post seems rooted in a wrong frame" on both of those, linking to other examples. But my own voting algorithm for those posts will personally be asking "does this single post have a high overall mix of 'true' and 'important'?"
I think most posts in the review, even the top posts, have something wrong with them, and in some cases I disagree with the author about which things are wrong-enough-to-warrant-fixing. I do feel that the overall review process isn't quite solid enough for me to really endorse the Best Of book as a statement of "The LessWrong Community fully endorses this post", and I think that's a major problem to be fixed for next year. But meanwhile I think it makes more sense to accept that some posts will have flaws.
↑ comment by Matt Goldenberg (mr-hire) · 2021-01-20T22:19:13.925Z · LW(p) · GW(p)
Zack specifically disavowed this post and the meta-honesty response, so I think it makes most sense to concentrate on Where To Draw The Boundaries and Heads I Win, Tails Never Heard Of Her
Ahh, I didn't realize that, definitely would not have reviewed this post if I realized this was the case.
But my own voting algorithm for those posts will personally be asking "does this single post have a high overall mix of 'true' and 'important'?"
Yeah, I think this is reasonable. I'm worried about things that are wrong in subtle, non-obvious ways with certain frames or assumptions, because it's easy for those to sneak in under the radar of someone's way of thinking, but I think it's reasonable to not worry about that as well.
comment by Chris_Leong · 2019-11-22T11:16:51.301Z · LW(p) · GW(p)
I'd suggest that it should be understood as a spectrum rather than a dichotomy. Dichotomies are just easier to explain.
↑ comment by Zack_M_Davis · 2019-11-22T16:28:37.664Z · LW(p) · GW(p)
I'm saying we should look at which context people consider relevant, not the amount of contextualizing they want.
Suppose Geraldine is a member of a political coalition that draws disproportionate support from green-eyed people, and Paulette is a member of a rival political coalition that draws disproportionate support from purple-eyed people. Whenever anyone says, "Green-eyed people commit twice as many murders," Geraldine objects that the speaker should have disclaimed that they're not saying green-eyed people should be stereotyped as criminals, but Paulette does not object. Whenever anyone says, "Purple-eyed people can't hear music," Paulette objects that the speaker should have disclaimed that they're not saying purple-eyed people should be stereotyped as uncultured, but Geraldine does not object.
Even if contextualizing/decoupling is a spectrum rather than a dichotomy, it doesn't help us understand the difference between Geraldine and Paulette: both of them demand contextualizing disclaimers, but in different situations. I think Geraldine and Paulette's behavior is explicable using the standard theory of implicature plus motivated reasoning governing what context seems "relevant" to them.
↑ comment by Chris_Leong · 2019-11-24T00:08:53.944Z · LW(p) · GW(p)
Well, it's hardly unusual that you can shift from viewing this as a dichotomy -> spectrum -> contextual depending on how much detail you want to go into.
↑ comment by Zack_M_Davis · 2019-12-06T03:35:57.583Z · LW(p) · GW(p)
I think I'm doing something more substantive than that.
I agree that it's not unusual to be able to look at the dichotomy of whether people exhibit behavior B, or the spectrum of how often they exhibit B, or the contexts in which they exhibit B, depending on how much detail you want to go into.
However, whether (or to what extent, or in what contexts) the dichotomy and spectrum views constitute a useful "dimensionality reduction", depends on the particular value of B. If B = "smiling", then I do expect the spectrum view to be a reasonable proxy for general happiness levels. But if B = "typing the letter 'q'", I don't expect the spectrum view to measure anything interesting.
It's certainly possible that there's a "general factor" of contextualizing—that people systematically and non-opportunistically vary in how inferentially distant [? · GW] a related claim has to be in order to not create an implicature that needs to be explicitly canceled if false. But I don't think it's obvious, and even if it's true, I don't think it's pedagogically wise [LW · GW] to use a politically-motivated appeal-to-consequences as the central case of contextualizing.
comment by Matt Goldenberg (mr-hire) · 2019-11-22T14:22:33.995Z · LW(p) · GW(p)
Or suppose you're a guest at my house, and you ask where the washing machine is, and I say it's by the stairs. If the machine then turns out to be broken, and you ask, "Hey, did you know your washing machine is broken?" and I say, "Yes", you're probably going to be pretty baffled why I didn't say "It's by the stairs, but you can't use it because it's broken" earlier (even though the decontextualized answer "It's by the stairs" was, in fact, true).
It doesn't seem like this is what either Chris Leong's post or John Nerst's post was about.
↑ comment by Zack_M_Davis · 2019-11-22T16:10:57.792Z · LW(p) · GW(p)
The intent of that paragraph is to provide an example illustrating the general concept of implicature, which explains when it makes sense to object that a literally true statement should have been provided with more context.
↑ comment by Matt Goldenberg (mr-hire) · 2019-11-22T16:31:44.794Z · LW(p) · GW(p)
Right, but the implacature of contextualizing in those posts has nothing to do with implacature.
↑ comment by Zack_M_Davis · 2019-11-22T16:40:53.614Z · LW(p) · GW(p)
Yes, it does. "Contextualizers" think that the statement "Green-eyed people commit twice as many murders" creates an implicature that "... therefore green-eyed people should be stereotyped as criminals" that needs to be explicitly canceled with a disclaimer, which is an instance of the more general cognitive process by which most people think that "The washing machine is by the stairs" creates an implicature of "... and the machine works" that, if it's not true, needs to be explicitly canceled with a disclaimer ("... but it's broken"). "Decouplers" don't think the statement about murder rates creates an implicature about stereotyping.
↑ comment by Matt Goldenberg (mr-hire) · 2019-11-22T18:22:53.734Z · LW(p) · GW(p)
I don't think it's necessarily about implacature. It's often about being taken "out of context" or used as a justification. That is, I may not think "green eyed people commit twice as many murders" implies anything about stereotyping, but I still think it may lead to more stereotyping due to motivated reasoning. It's much more about consequences rather than implications.
Edit:
To expand on this:
There are several types of context:
- The context in which it is said (Implacature)
- The context about the state of mind/biases which the other person is in when hearing it (Inference)
- The context in which what was said may be used (culture).
Contextualizing vs. decoupling seem much more about the latter two to me - No one is arguing that you shouldn't be clear in your speech. The question is how much you should take into account other people and culture. That is, decouplers often decouple from consequences and focus merely on implacature, whereas contextualizers try and contextualize how what they say will be interpreted and used.
↑ comment by Said Achmiz (SaidAchmiz) · 2019-11-22T19:27:47.040Z · LW(p) · GW(p)
Meta: it’s implicature. The second vowel is an i.
↑ comment by Matt Goldenberg (mr-hire) · 2019-11-22T20:20:18.958Z · LW(p) · GW(p)
Yeah I realized that when reading through but going back and changing everything feels pointless since you basically get the implicature of what I was trying to say.
comment by Zvi · 2021-01-09T18:14:12.302Z · LW(p) · GW(p)
At first when I read this, I strongly agreed with Zack's self-review that this doesn't make sense to include in context, but on reflection and upon re-reading the nominations, I think he's wrong and it would add a lot of value per page to do so, and it should probably be included.
The false dichotomy this dissolves, where either you have to own all implications, so it's bad to say true things that imply things that are true but that focusing upon would have unpleasant consequences, or it has to be fine to ignore all the extra communication that's involved in what you chose to say in the place and way that you said it - it's not something created by Chris Leong or John Nerst, it's something common, and worth dissolving.
And this does that quite efficiently, while suggesting a very good common sense solution that, while not fully specified or complete because that's not really possible here, seems clearly the right approach.
comment by Raemon · 2019-11-24T03:39:24.005Z · LW(p) · GW(p)
This post seemed helpful for advancing my overall understanding of the "discourse norms landscape." I agree that "what counts as contextually relevant" is often one of the more important questions to be asking. In general, I think it's hard to judge an ontological/category-suggesting post, and I think this class of essay is roughly the right way to engage with it from a critical perspective.
I do feel sort of confused about the framing of this post as a rebuttal to "Decoupling vs Contextualization" though – in particular I'm a bit surprised about the combination of opinions you're expressing here, and elsewhere.
It seems like the main substance you're responding to is "Decoupling/Contextualization promotes to attention that Contextualization might be a preferred norm-set, and that opens the door to all kinds of political shenanigans." And I think that's true – but... it's not really what I thought of as the point Decoupling/Contextualization was making.
The question I interpreted D/C to be addressing was not "what sort of norms are good?" but "what sort of norm conflicts might you run into in the wild, that will make discourse more confusing and difficult?". And when evaluating that post, the key question I'd be asking is not "is one of these norm-sets dangerous to encourage?" but "is this actually a meaningful way to carve reality?"
If Decoupled/Contextualized is a common way that different people approach discussions, that's really important to be able to talk about! Especially if one of those ways is epistemically dangerous!
If it's not an important difference that's causing discourse to be confusing/difficult, then it makes sense not to incorporate it into our longterm jargon.
I'm not quite sure of your intended reading here, but... it feels like you're saying "this categorization is promoting to attention a concept that's harmful to our discourse... therefore we shouldn't have this categorization", which... seems very different from what you've historically argued for.
What surprises me is the combination of your concern for this here, but your praise of Local Validity as a Key to Sanity and Civilization – I think Local Validity was an important concept, and I saw the Decoupled/Contextual norms post as an important complement to it.
(Switching to focus on my own opinions rather than my confusions about yours).
Decoupling seemed to be a key cultural component that enables Local Validity checking. (It's not identical to local validity, but seems related). And decoupling is hard, or at least not something most people do by default. Language is often muddled together with politics, and it takes a special culture to enable them to be separated.
(When I first read your post, here, I thought "oh, yeah maybe D/C isn't a useful joint to carve here", but it was re-reading Local Validity that shifted me back towards "hmm, if D/C isn't a useful joint to carve, that sort of implies that Local Validity isn't as important a concept. I'm in fact fairly certain Local Validity is important, and upon reflection there are definitely people who don't have that concept. And while the Local Validity essay is good, I think the D/C post is much shorter, while also addressing a different slicing of the problem that is sometimes more relevant)
So my current epistemic state, taking this post and those others all together, is to take the arguments in this post as more of a "Yes, and" rather than a "No, but" to the D/C post.
↑ comment by Zack_M_Davis · 2019-11-24T19:42:06.722Z · LW(p) · GW(p)
but "what sort of norm conflicts might you run into in the wild, that will make discourse more confusing and difficult?".
And if both of two purported norm-sets are wrong (i.e., fail to create accurate maps) in different ways, then taxonomizing possible norm-sets along that axis will be even more confusing. If some bids for contextualization are genuinely clarifying (e.g., pointing out that your interlocutor is refuting a weak man, and that readers will be misled if they infer that there do not exist any stronger arguments for the same conclusion [LW · GW]), but others are obfuscating (e.g., appeals-to-consequences of the side effects of a belief, independently of whether the belief is true), then lumping them both together under "contextualizing norms" will confuse people who are searching for clarity-creating norms. (In future work, I want to "pry open the black box" of relevance, which I'm relying on for a lot despite being confused about how it works.)
What surprises me is the combination of your concern for this here, but your praise of Local Validity as a Key to Sanity and Civilization
The part of you that was surprised by this will be even more surprised by my next planned post, an ultra-"contextualizing" reply to "Meta-Honesty" [LW(p) · GW(p)]! (But it should be less surprising when you consider the ways in which my earlier "Heads I Win, Tails?—Never Heard of Her" [LW · GW] is also ultra-"contextualizing.")
↑ comment by Raemon · 2019-11-24T03:48:57.727Z · LW(p) · GW(p)
Some further potentially relevant thoughts:
Some upcoming content in my Doublecrux/Frames sequence is that while there is some "arbitrariness" to frames, there are facts of the matter of whether a frame is consistent or self-defeating, and whether it is useful for achieving particular goals.
I think an important skill for a rationalist culture to impart is "how to productively disagree on frames", and figure out under what circumstances you should change your frame. I think this is harder than changing your beliefs, and some of the considerations are a bit different. But, still an important skill.
Nonetheless, because frames are often wound up in one's identity (even moreso than beliefs), it's often actively unproductive to jump to "Frame X is worse than Frame Y". In my Noticing Frames post, I avoided getting too opinionated about which frames were good for which reasons, because the main point was being able to notice differing frames at all, and it's easier to focus on that when you're not defensive.
It feels like something similar is at play in the Decoupled/Contextual post. My impression is that you're worried about it shifting the Overton window towards "arbitrary politically motivated bids for contextual sensitivity". But one major use case for the D/C post is to introduce people who normally think contextually, to the idea that they might want to think decoupled sometimes. And that's easier to do without putting them on the defensive.
comment by Dagon · 2019-11-22T22:12:02.047Z · LW(p) · GW(p)
THANK YOU for talking about behaviors and expectations for communications, and recognizing that these are tactics for communication and/or persuasion (and/or signaling), not attributes of a given speaker.
One of my main objections to the previous discussion was that they were saying "contextualizer" as a description of a person's style, rather than "(over-)contextualized interpretation" as a description of a given communication's intent or interpretation.
I use both contextual and decoupled aspects, depending on my audience and intent. Communication style is an active choice.
comment by Zack_M_Davis · 2020-12-30T05:14:33.276Z · LW(p) · GW(p)
(Self-review.) I oppose including this post in a Best-of-2019 collection. I stand by what I wrote, but it's not potential "canon" material, because this was a "defensive" post for the 2018 Review [LW(p) · GW(p)]: if the "contextualizing vs. decoupling" idea hadn't been as popular and well-received as it was, there would be no reason for this post to exist.
A standalone Less Wrong "house brand" explanation of Gricean implicature (in terms of Bayesian signaling games [LW · GW], probably?) could be a useful reference post, but that's not what this is.
comment by Vaniver · 2020-12-12T19:55:21.621Z · LW(p) · GW(p)
There's a set of posts by Zack_M_Davis in a similar vein that came out in 2019; some examples are Maybe Lying Doesn't Exist [LW · GW], Firming Up Not-Lying Around Its Edge Cases Is Less Broadly Useful Than One Might Think [LW · GW], Free Speech and Triskaidekaphobic Calculators: A Reply to Hubinger on the Relevance of Public Online Discussion to Existential Risk [LW · GW], and this one. Overall, this was the one I liked the most at the time (as evidenced by strong-upvoting it, and only weak or not upvoting the others). It points clearly at a confusion underlying a common dichotomy, in a way that I think probably changed the discourse afterwards for the better.
comment by habryka (habryka4) · 2020-12-15T07:00:39.153Z · LW(p) · GW(p)
This post gave specific words to a problem I've run into many times, and am just pretty glad to have words for. It also became relevant in a bunch of contexts I was in.