Decoupling vs Contextualising Norms
post by Chris_Leong · 2018-05-14T22:44:51.705Z · LW · GW · 51 comments
One of the most common difficulties faced in discussions is when the parties involved have different beliefs as to what the scope of the discussion should be. In particular, John Nerst identifies two styles of conversation as follows:
- Decoupling norms: It is considered eminently reasonable to require your claims to be considered in isolation - free of any context or potential implications. Insisting on raising these issues despite a decoupling request is often seen as sloppy thinking or an attempt to deflect.
- Contextualising norms: It is considered eminently reasonable to expect certain contextual factors or implications to be addressed. Not addressing these factors is often seen as sloppy, uncaring or even an intentional evasion.
Let's suppose that blue-eyed people commit murders at twice the rate of the rest of the population. With decoupling norms, it would be considered churlish to object to such direct statements of fact. With contextualising norms, this is deserving of criticism as it risks creating a stigma around blue-eyed people. At the very least, you would be expected to have issued a disclaimer to make it clear that you don't think blue-eyed people should be stereotyped as criminals.
John Nerst writes (slightly edited): "To a contextualiser, decouplers’ ability to fence off any threatening implications looks like a lack of empathy for those threatened, while to a decoupler the contextualiser's insistence that this isn’t possible looks like naked bias and an inability to think straight".
For both of these norms, it's quite easy to think of circumstances in which expecting the other party to follow them would normally be considered unreasonable. Weak men are superweapons demonstrates how true statements can be used to destroy a group's credibility, and so it may be quite reasonable to refuse to engage in a high-decoupling conversation if you suspect this is the other person's strategy. On the other hand, it's possible to use a strategy of painting every action you dislike as part of someone's agenda (neo-liberal agenda, cultural marxist agenda, far right agenda, etc. - take your pick). People definitely have agendas and take actions as a result of this, but the loose use of universal counter-arguments should rightly be frowned upon.
I agree with the contextualisers that making certain statements, even if true, can be incredibly naive in highly charged situations that could be set off by a mere spark. On the other hand, it seems that we need at least some spaces for engaging in decoupling-style conversations. Eliezer wrote an article on Local Validity as a Key to Sanity and Civilisation [LW · GW]. I believe that having access to such spaces is another key.
These complexities mean that there isn't a simple prescriptive solution here. Instead, this post merely aims to describe the phenomenon; at least if you are aware of it, you may be better placed to navigate it.
Further reading:
- A Deep Dive into the Harris-Klein Controversy - John Nerst's Original Post
- Putanumonit - Ties decoupling to mistake/conflict theory
(ht prontab. He actually uses low decoupling/high decoupling, but I prefer avoiding double-negatives. Both John Nerst and prontab passed up the opportunity to post on this topic here)
Comments sorted by top scores.
comment by Matt Goldenberg (mr-hire) · 2019-12-07T23:19:59.781Z · LW(p) · GW(p)
The core of this post seems to be this:
- Decoupling norms: It is considered eminently reasonable to require your claims to be considered in isolation - free of any context or potential implications. Insisting on raising these issues despite a decoupling request is often seen as sloppy thinking or an attempt to deflect.
- Contextualising norms: It is considered eminently reasonable to expect certain contextual factors or implications to be addressed. Not addressing these factors is often seen as sloppy or even an intentional evasion.
As Zack_M_Davis points out in his review, one of the issues with this definition is that there are infinite variations of "context" that can be added in any given situation ("I'm wearing a hat while saying this"), and there are infinite implications that any given thing you say can have ("this implies that the speaker has a mouth and could therefore say the thing").
However, I do think that there is an actual, useful, real distinction here that's both important and doesn't have another handle to describe it. The thing I think this is pointing at is "How much you and others are willing to think about the consequences of what is said separate from its truth value."
This split is quite important in a community that cares strongly about truth, and strongly about the outcomes of the world, and being able to say "we're in a decoupling space" or "this is a contextualizing conversation" or "I generally support decoupling as an epistemic norm" is quite an important shorthand to point at that thing.
I'd love to see the post cleaned up to make it clear that you're talking about "contextualizing as understanding how your words will have an effect in the context that you're in" and decoupling as "decoupling what you say from the effects it may create."
↑ comment by Taran · 2019-12-22T10:02:17.318Z · LW(p) · GW(p)
I'd love to see the post cleaned up to make it clear that you're talking about "contextualizing as understanding how your words will have an effect in the context that you're in" and decoupling as "decoupling what you say from the effects it may create."
I don't think there's a general consensus that this post does, or should, mean that. For example, Raemon's review suggests "jumbled" as an antonym to "decoupled", and gives a description that's more general than yours. For another example, you described your review of Affordance Widths [LW(p) · GW(p)] as a decoupled alternative to the contextualizing reviews that others had already written, but the highest-voted contextualizing review [LW(p) · GW(p)] is explicitly about the truth value of ialdabaoth's post -- it incorporates information about the author, but only to make the claim that the post contains an epistemic trap, one which we could in principle have noticed right away but which in practice wasn't obvious without the additional context of ialdabaoth's bad behavior. This is clearly contextualizing in some sense, but doesn't match the definition you've given here.
I think this post is fundamentally unfinished. It drew a distinction that felt immediate and important to many commenters here, but a year and a half later we still don't have a clear understanding of what that distinction is. I think that vagueness is part of what has made this post popular: everyone is free to fill in the underspecified parts with whatever interpretation makes the most sense to them.
↑ comment by Matt Goldenberg (mr-hire) · 2019-12-22T19:23:23.094Z · LW(p) · GW(p)
I think this is valid.
comment by Kaj_Sotala · 2018-05-15T10:42:03.291Z · LW(p) · GW(p)
AAAAAAAAA THIS THING IT HAS A NAME AT LAST!
(ahem.)
comment by Wei Dai (Wei_Dai) · 2019-12-10T14:54:09.835Z · LW(p) · GW(p)
It occurs to me that "free speech", "heterodoxy", and "decoupling vs contextualising" are all related to intelligence vs virtue signaling [LW · GW]. In particular, if you want to do or see more intelligence signaling, then you should support free speech and decoupling norms. If you want to do or see more virtue signaling, then you should support contextualising norms and restrictions on free speech. Heterodox ideas tend to be better (more useful) for intelligence signaling and orthodox ideas better for virtue signaling. (Hopefully this is obvious once pointed out, but I can explain more if not.)
comment by Sniffnoy · 2018-05-15T07:17:48.235Z · LW(p) · GW(p)
Decoupling, orthogonality, unbundling, separation of concerns, relevance, the belief that the genetic fallacy is in fact a fallacy, hugging the query.... :)
Not a new idea, but an important one, and worth writing explicitly about!
↑ comment by Chris_Leong · 2018-05-16T13:11:23.456Z · LW(p) · GW(p)
Any links to where this has already been discussed?
comment by quanticle · 2018-05-14T23:45:00.481Z · LW(p) · GW(p)
Question: is there any reason to use the words "decoupling" rather than "coupling"? It seems to me that "low decoupling" is logically equivalent to "high coupling" and "high decoupling" is logically equivalent to "low coupling". So in the spirit of simplification, would it not be better to state the distinction as being between "high coupling" people and "low coupling" people?
↑ comment by gjm · 2018-05-15T01:46:44.830Z · LW(p) · GW(p)
To me, (1) "coupling" suggests specifically joining in pairs much more strongly than "decoupling" suggests specifically detaching pairs and (2) "coupling" suggests that the default state of the things is disconnection, whereas "decoupling" suggests that the default state is connection.
The usual scenario here is that (1) you have lots of things that all relate to one another, and that (2a) most people find it difficult to disentangle, or disapprove of disentangling, and that (2b) all really truly are connected to one another, so that considering them in isolation is a sometimes useful and effective cognitive trick rather than any sort of default.
For all those reasons I think "decoupling" is a better term than "coupling" here. (I also like the opposition decoupling/contextualizing, as found in some of the earlier things Nerst links to, rather than more-decoupling/less-decoupling. When faced with a pile of interrelated things, sometimes you want to decouple them and sometimes you want to pay special attention to the interrelations. It's not as simple as there being some people who are good at decoupling and some who aren't. Though of course most people are bad at decoupling and bad at contextualizing...)
↑ comment by Chris_Leong · 2018-05-15T10:14:54.874Z · LW(p) · GW(p)
Actually, I like Decoupling vs. Contextualising more too, especially as they become single words.
↑ comment by Sniffnoy · 2018-05-15T07:21:09.490Z · LW(p) · GW(p)
I definitely think Nerst has things the right way round, but I'm having trouble making explicit why. One reason, though, that I can make explicit is that, well, tangling everything together is the default. Decoupling -- orthogonality, unbundling, separation of concerns, hugging the query -- is rarer, takes work, and is worth pointing out.
comment by Raemon · 2019-12-09T23:15:57.144Z · LW(p) · GW(p)
This post seems to be making a few claims, which I think can be evaluated separately:
1) Decoupling norms exist
2) Contextualizing norms exist
3) Decoupling and contextualization norms are useful to think of as opposites (either as a dichotomy or a spectrum)
(i.e. there are enough people using those norms that it's a useful way to carve up the discussion-landscape)
There's a range of "strong" / "weak" versions of these claims – decoupling and/or contextualization might be principled norms that some people explicitly endorse, or they might just be clusters of tendencies people have sometimes.
In the comments of his response post [LW(p) · GW(p)], Zack Davis noted:
It's certainly possible that there's a "general factor" of contextualizing—that people systematically and non-opportunistically vary in how inferentially distant [? · GW] a related claim has to be in order to not create an implicature that needs to be explicitly canceled if false. But I don't think it's obvious, and even if it's true, I don't think it's pedagogically wise [LW · GW] to use a politically-motivated appeal-to-consequences as the central case of contextualizing.
And, reading that, I think it may actually be the opposite – there is a general factor of "decoupling", not contextualizing. By default people are using language for a bunch of reasons all jumbled together, and it's a relatively small set of people who have the deliberate-decoupling tendency, skill and/or norm of "checking individual statements to see if they make sense."
Upon reflection, this is actually more in line with the original Nerst article, which used the terms "Low Decoupling" and "High Decoupling", which less strongly conveys the idea of "contextualizer" being a coherent thing.
On the other hand, Nerst's original post does make some claims about Klein being the sort of person (a journalist) who is "definitively a contextualizer, as opposed to just 'not a decoupler'", here:
While science and engineering disciplines (and analytic philosophy) are populated by people with a knack for decoupling who learn to take this norm for granted, other intellectual disciplines are not. Instead they’re largely composed of what’s opposite the scientist in the gallery of brainy archetypes: the literary or artistic intellectual.
This crowd doesn’t live in a world where decoupling is standard practice. On the contrary, coupling is what makes what they do work. Novelists, poets, artists and other storytellers like journalists, politicians and PR people rely on thick, rich and ambiguous meanings, associations, implications and allusions to evoke feelings, impressions and ideas in their audience. The words “artistic” and “literary” refer to using idea couplings well to subtly and indirectly push the audience’s meaning-buttons.
To a low-decoupler, high-decouplers’ ability to fence off any threatening implications looks like a lack of empathy for those threatened, while to a high-decoupler the low-decoupler's insistence that this isn’t possible looks like naked bias and an inability to think straight. This is what Harris means when he says Klein is biased.
Although they're interwoven, I think it might be worth distinguishing some subclaims here (not necessarily made by Nerst or Leong, but I think implied and worth thinking about)
- There exist a class of general storytelling contextualists
- There exist PR-people/politicians/activists who wield contextual practice as a tool or weapon.
- There exist "principled contextualizers" who try to evenly come to good judgments that depends on context.
My Epistemic State
Empirical Questions
There's a set of fairly concrete "empirical" questions here, which are basically "if you do a bunch of factor analysis of discussions, would decoupling and/or contextualization and/or any of the specific contextual-subcategories listed above have major predictive power?"
The experiments you'd run for this might be expensive but not very confusing.
I would currently guess:
- "Decoupling factor" definitely exists and is meaningful
- Storytelling contextualists exist and are meaningful (though not necessarily especially useful to contrast with decouplers)
- PR-ists who wield context as tool/weapon definitely exist (and decoupling is often relevant to their plans, so they have developed tools that allow them to modulate the degree to which decoupling fits into the conversational frame)
- I think I could name a few people at least attempting to be "fair, principled contextualists", at least in some circumstances. I am less confident that this is a real thing, because "secretly they're just really effective or subtle PR-ists, either intentionally or not" is a pretty viable alternative.
Conceptual Question
I have a remaining confusion, which is something like "what exactly is a contextualizer?". I feel like I have a crisp definition of "decoupling". I don't have that for contextualizers. Are the three subcategories listed above really 'relatives' or are they just three different groups doing different things? Is it meaningful to put these on a spectrum with decouplers on the other side?
mr-hire suggests:
"How much you and others are willing to think about the consequences of what is said separate from its' truth value."
Which sounds like a plausibly good definition, that maybe applies to all three of the subcategories. But I feel like it's not quite the natural definition for each individual subcategory. (Rather, it's something a bit downstream of each category definition)
"Jumbled" vs "Contextual"
"High decoupling" and "low decoupling" are still pretty confusing terms, even if you get rid of any notion of "low decoupling" being a cogent thing. It occured to me, writing this review, that you might replace the word "contextual" with "jumbled".
Contextual implies some degree of principled norms. Jumbled points more towards "the person is using language for a random mishmash of strategies all thrown together." (Politicians might sometimes be best described as "jumbled", and sometimes as "principled" [but, not necessarily good principles, i.e. 'I will deliberately say whatever causes my party to win']).
...
That's what I got for now.
↑ comment by Chris_Leong · 2019-12-10T01:02:30.976Z · LW(p) · GW(p)
I really don't like the term jumbled as some people would likely object much more to being labelled as jumbled than as a contextualiser. The rest of this comment makes some good points, but sometimes less is more. I do want to edit this article, but I think I'll mostly engage with Zack's points and reread the article.
↑ comment by Raemon · 2019-12-10T01:22:38.018Z · LW(p) · GW(p)
The OP comment was optimizing for "improving my understanding of the domain" more than for giving direct advice on how to change the post.
(I'm not necessarily expecting the points and confusions there to resolve within the next month – it's possible that you'll reflect on it a bit and then figure out a slightly different orientation to the post, that distills the various concepts into a new form. Another possible outcome is that you leave the post as-is for now, and then in another year or two after mulling things over someone writes a new post doing a somewhat different thing, that becomes the new referent. Or, it might just turn out that my current epistemic state wasn't that useful. Or other things)
Re: "Jumbled"
I think there's sort of a two-step process that goes into naming things (which, ironically or appropriately, maps directly onto the post) – first figuring out "okay what actually is this phenomenon, and what name most accurately describes it?" and then, separately, "okay, what sort of names are reliably going to make people angry and distract from the original topic if you apply it to people, and are there alternative names that cleave closely to the truth?"
(my process for generating names that risk offending is something like a multi-step Babble and Prune, where I generate names aiming to satisfice on "a good explanation of the true phenomenon" and "not likely to be unnecessarily distracting", until I have a name that satisfies both criteria)
I haven't tried generating a maximally good name for Jumbled yet since I wasn't sure this was even carving reality the right way.
But, like, it's not an accident that 'jumbled' is more likely to offend people than 'contextualized'. I do, in fact, think worse of people who have jumbled communication than deliberately contextualized communication. (compare "Virtue Signalling", which is an important term but is basically an insult except among people who have some kind of principled understanding that "Yup, it turns out some of the things I do had unflattering motives and I've come to endorse that, or endorse my current [low] degree of prioritizing changing it.")
I am a conversation consequentialist and think it's best to find ways of politely pointing out unflattering things about people in ways that don't make them defensive. But, it might be that the correct carving of reality includes some unflattering descriptions of people and maybe the best you can do is minimize distraction-damage.
comment by Zack_M_Davis · 2019-12-06T03:45:55.436Z · LW(p) · GW(p)
Reply: "Relevance Norms; Or, Gricean Implicature Queers the Decoupling/Contextualizing Binary" [LW · GW] (further counterreplies in the comment section)
I argue that this post should not be included in the Best-of-2018 compilation.
comment by orthonormal · 2019-12-07T22:00:13.717Z · LW(p) · GW(p)
I wish John Nerst could be convinced to crosspost here.
comment by Raemon · 2019-11-21T03:56:47.923Z · LW(p) · GW(p)
Two years later, the concept of decoupled vs contextualizing has remained an important piece of my vocabulary.
I'm glad both for this distillation of Nerst's work (removing some of the original political context that might make it more distracting to link to in the middle of an argument), and in particular for the jargon-optimization that followed ("contextualized" is much more intuitive than "low-decoupling.")
This post has been object-level useful, for navigating particular disagreements. (I think in those cases I haven't brought it up directly myself, but I've benefited from a sometimes-heated-discussion having access to the concepts).
I think it's also been useful at a more meta-level, as one of the concepts in my toolkit that enable me to think higher level thoughts [LW · GW] in the domain of group norms and frame disagreements [LW · GW]. A recent facebook discussion was delving into a complicated set of differences in norms/expectations, where decoupled/contextualizing seemed to be one of the ingredients but not the entirety. Having the handy shorthand and common referent allowed it to only take up a single working-memory slot while still being able to think about the other complexities at play.
↑ comment by Zack_M_Davis · 2019-11-21T05:18:56.425Z · LW(p) · GW(p)
This post has been object-level useful, for navigating particular disagreements.
Can you give specific examples? I've basically only seen "contextualizing norms" used as a stonewalling tactic [LW · GW], but you've probably seen discussions I haven't.
↑ comment by Raemon · 2019-11-21T06:03:36.929Z · LW(p) · GW(p)
The most recent example was this facebook thread. I'm hoping over the next week to find some other concrete examples to add to the list, although I think most of the use cases here were in hard-to-find-after-the-fact facebook threads.
Note that much of the value add here is being able to succinctly talk about the problem, sometimes saying "hey, this is a high-decoupling conversation/space, read this blogpost if you don't know what that means".
I don't think I've run into people citing "contextualizing norms" as a reason not to talk about things, although I've definitely run into people operating under contextualizing norms in stonewally-ways without having a particular name for it. I'd expect that to change as the jargon becomes more common though, and if you have examples of that happening already that'd be good to know.
(Hmm – okay, I guess it'd make sense if you saw some of our past debates as something like me directly advocating for contextualizing, in a way that seemed harmful to you. I hadn't been thinking about them through the decoupled/contextualized lens; not quite sure if the lens fits, but it might make sense upon reflection)
It still seems like having the language here is a clear net benefit though.
↑ comment by Zack_M_Davis · 2019-11-22T06:23:35.963Z · LW(p) · GW(p)
as the jargon becomes more common
If the jargon becomes more common. (The Review Phase hasn't even started yet!) I wrote a reply explaining in more detail why I don't like this post [LW · GW].
comment by [deleted] · 2018-05-16T10:48:28.388Z · LW(p) · GW(p)
I think this article is a considerable step forward, but it could benefit from some examples. I think I have a pretty good idea what this is about (and share the horror of being called out by a low-decoupler for some kind of -ism), but still.
↑ comment by Chris_Leong · 2018-05-16T13:18:21.773Z · LW(p) · GW(p)
Hmm, well the article has an example, but it is super long and I'm trying to avoid this becoming political. Any suggestions for examples?
↑ comment by Daniel Kokotajlo (daniel-kokotajlo) · 2018-05-18T23:42:58.324Z · LW(p) · GW(p)
The example you use is already CW-enough that high-decouplers may be suspicious of or hostile to the point you are trying to make.
Then again, maybe anything else would be so far removed from our shared experience that it wouldn't serve as a quick and powerful illustration of your point.
Here are some suggestions made with both of these points in mind:
--The original example Scott uses about a Jew in future Czarist Russia constantly hearing about how powerful Jews are and how evil Israel is.
--Flipping the script a bit, how about an example in which someone goes around saying "86% of rationalists are straight white men" (or something like that, I don't know the actual number).
--Or: "Effective Altruists are usually people who are biased towards trying to solve their problems using math."
Come to think of it, I think including one of those flip-script examples would be helpful in other ways as well.
comment by Ruby · 2019-12-25T21:32:34.292Z · LW(p) · GW(p)
At the same time, I don't want to fall for the Fallacy of the Undistributed Middle and assume that both perspectives are equally valid.
Minor possible quibble: based on the definition in the link given, I think the Fallacy of the Undistributed Middle doesn't refer to assuming the deep-wisdom position that two sides of a debate each have merit.
The fallacy of the undistributed middle (Lat. non distributio medii) is a formal fallacy that is committed when the middle term in a categorical syllogism is not distributed in either the minor premise or the major premise. It is thus a syllogistic fallacy.
comment by Matt Goldenberg (mr-hire) · 2019-11-21T21:19:25.011Z · LW(p) · GW(p)
This is one of the major splits I see in norms on LW (the other being Combat vs. Nurture). Having a handy tag for this is quite useful for pointing at a thing without having to grasp for an explanation.
comment by Kaj_Sotala · 2019-11-21T11:19:29.094Z · LW(p) · GW(p)
My nomination seconds the things that were said in the first paragraphs of Raemon's nomination [LW(p) · GW(p)].
comment by Martin Sustrik (sustrik) · 2018-05-27T06:41:04.415Z · LW(p) · GW(p)
Isn't it just the distinction between how facts are used in scientific discourse (you state a fact, expect it to be confirmed or challenged) vs. how they are used in political discourse (carefully select other facts to augment it and suit your political narrative)? I guess Umberto Eco would have had something to say about that.
comment by Michael Pye (michael-pye) · 2018-05-30T11:59:06.279Z · LW(p) · GW(p)
I can see the model's usefulness, but I think you are implying they are equal. This seems wrong. Decoupling requires more specific knowledge and concentration and is more analogous to Kahneman's slow thinking. Obvious context can be thought of in a rich way, but only after the individual ideas have been richly defined (often in previous thoughts). We cycle between the two approaches, but I feel decoupling requires the greater focus and is lacking when we discuss topics we are unfamiliar with (and a possible link with the Dunning-Kruger effect). My understanding of cognitive load theory from education (limited working memory) also seems relevant. By limiting context we can more intensely analyse each aspect, repackaging them efficiently and accurately before returning contextual information to the whole problem. This seems to me the classic method of enlightenment thinking. Obviously, understanding why other people might misunderstand this is important; however, that is an argument about politics, rhetoric and persuasion, not about clarity of thought.
Finally, pure contextualisation is only about outcomes and decoupling only about process. Without understanding alternate approaches (which requires specialist knowledge), our assessment of the best method to achieve our outcome is likely flawed.
comment by zulupineapple · 2018-05-15T13:37:25.729Z · LW(p) · GW(p)
Since this comes from the Harris-Klein debate, I should point to this [LW · GW] recent post and especially the comments underneath it. To summarize, the "high decoupler" Harris is making errors. This is, of course, what happens when you ignore the context of the real world.
Now, there are perhaps better examples of disagreement between "high decouplers" and "low decouplers", and perhaps those are still meaningful categories. But I'd be wary of conclusions made with high decoupling.
I propose an alternative view, where "low decoupling" is the objectively correct way to look at the world, and "high decoupling" is something you do because you're lazy and unwilling to deal with all the couplings of the real world.
↑ comment by Chris_Leong · 2018-05-15T23:04:39.883Z · LW(p) · GW(p)
""high decoupling" is something you do because you're lazy and unwilling to deal with all the couplings of the real world" - I suspect you don't quite understand what high decoupling is. Have you read Local Validity as a Key to Sanity and Civilisation [LW · GW]? High decoupling conversations allow people to focus on checking the local validity of their arguments.
↑ comment by zulupineapple · 2018-05-16T09:05:11.165Z · LW(p) · GW(p)
High decoupling conversations allow people to focus on checking the local validity of their arguments.
We unfortunately live in a world where sometimes A implies C, but A & B does not imply C, for some values of A, B, C. So, if you're talking about A and C, and I bring up B, but you ignore it because that's "sloppy thinking", then that's your problem. There is nothing valid about it.
I suspect you don't quite understand what high decoupling is.
High decoupling is what Harris is doing in that debate. What he is doing is wrong. Therefore high decoupling is wrong (or at least unreliable).
I get the feeling that maybe you don't quite understand what low decoupling is? You didn't say anything explicitly negative about it, but I get the feeling that you don't really consider it a reasonable perspective. E.g. what is the word "empathy" doing in your post? It might be pointing to some straw man.
↑ comment by Ben Pace (Benito) · 2018-05-17T16:58:17.773Z · LW(p) · GW(p)
High decoupling is what Harris is doing in that debate. What he is doing is wrong. Therefore high decoupling is wrong (or at least unreliable).
Upvoted for going all-in on a low-decoupling norm - I can't tell whether that was intentionally funny or you're genuinely living life by low-decoupling norms.
(Either way I think you're passing the ITT for low-decoupling, so thanks.)
↑ comment by zulupineapple · 2018-05-17T18:39:31.364Z · LW(p) · GW(p)
If you think there is something funny about low decoupling, then you're probably strawmanning it. Or maybe it was a straw man all along, and I'm erroneously using that term to refer to something real.
or you're genuinely living life by low-decoupling norms.
I can't say that I do. But I try to. Because high decoupling leads to being wrong.
↑ comment by Chris_Leong · 2018-05-16T13:13:36.974Z · LW(p) · GW(p)
"We unfortunately live in a world where sometimes A implies C, but A & B does not imply C, for some values of A, B, C. So, if you're talking about A and C, and I bring up B, but you ignore it because that's "sloppy thinking", then that's your problem. There is nothing valid about it." - What kind of "implies" are you talking about? Surely not logical implications, but rather the connotations of words? If so, I think I know what I need to clarify.
I didn't comment on what norms should be in wider society, just that high-decoupling spaces are vital. I was going to write this in my previous comment, but I had to run out the door. John Nerst explains "empathy" much more in his post.
↑ comment by zulupineapple · 2018-05-16T17:40:20.668Z · LW(p) · GW(p)
I'm talking about the kind of "X implies Y" where observing X leads us to believe that Y is also likely true. For example, take A="wet sidewalk" and C="rain". Then A implies C. But if B="sprinkler", then A&B no longer imply C. You may read this [LW · GW], also by Eliezer and somewhat relevant.
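A minimal sketch of the explaining-away pattern being pointed at here, with made-up numbers (the priors and the p_wet table below are purely illustrative assumptions, not anything claimed in the thread): seeing a wet sidewalk raises the probability of rain, but then learning the sprinkler ran drops it back toward the prior.

```python
from itertools import product

P_RAIN = 0.2        # assumed prior probability of rain
P_SPRINKLER = 0.1   # assumed prior probability the sprinkler ran

def p_wet(rain: bool, sprinkler: bool) -> float:
    """Assumed probability the sidewalk is wet, given its two possible causes."""
    if rain and sprinkler:
        return 0.99
    if rain or sprinkler:
        return 0.9
    return 0.01

def posterior_rain(observed_sprinkler=None) -> float:
    """P(rain | wet sidewalk), optionally also conditioning on the sprinkler state."""
    num = den = 0.0
    for rain, sprinkler in product([True, False], repeat=2):
        if observed_sprinkler is not None and sprinkler != observed_sprinkler:
            continue
        weight = ((P_RAIN if rain else 1 - P_RAIN)
                  * (P_SPRINKLER if sprinkler else 1 - P_SPRINKLER)
                  * p_wet(rain, sprinkler))
        den += weight
        if rain:
            num += weight
    return num / den

print(round(posterior_rain(), 2))                         # ~0.70: wet sidewalk suggests rain
print(round(posterior_rain(observed_sprinkler=True), 2))  # ~0.22: the sprinkler explains it away
```

With these illustrative numbers, "A implies C" in the colloquial sense used above (P(rain | wet) ≈ 0.70), while "A & B does not imply C" (P(rain | wet, sprinkler) ≈ 0.22) - the distinction Said formalises in conditional-probability terms further down the thread.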
John Nerst explains "empathy" much more in his post.
Yes, I've read that, and he is also strawmanning. Lack of empathy is not the problem with what Harris is saying. Did you read the comments I linked to? Or should I have quoted them here?
↑ comment by Said Achmiz (SaidAchmiz) · 2018-05-16T19:38:16.385Z · LW(p) · GW(p)
For example, take A=“wet sidewalk” and C=“rain”. Then A implies C.
But this is false. That was Eliezer’s whole point.
↑ comment by zulupineapple · 2018-05-17T18:29:50.632Z · LW(p) · GW(p)
When B is not known or known to be false, A implies C, and, when it is known to be true, A&B do not imply C. Surely we have no actual disagreement here, and I only somehow managed to be unclear that, before I introduced B, it wasn't known?
↑ comment by Said Achmiz (SaidAchmiz) · 2018-05-17T18:54:52.617Z · LW(p) · GW(p)
When B is not known … A implies C
No. This is wrong. This is what I am saying: when B is not known, A does not imply C. A can only imply C if B is known to be false.
Edit: In other words, A -> (B ∨ C).
Edit 2: Spelling it out in more detail:
- A ⇒ (B ∨ C)
- ((A ⇒ (B ∨ C)) ∧ A) ⇏ C
- ((A ⇒ (B ∨ C)) ∧ (A ∧ ¬B)) ⇒ C
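For anyone who wants to check the last line mechanically, here is a minimal sketch in Lean (purely illustrative, not something from the thread); A, B, C are just the placeholder propositions above:

```lean
-- From A ⇒ (B ∨ C), A, and ¬B, conclude C.
example (A B C : Prop) (h : A → B ∨ C) (ha : A) (hnb : ¬B) : C :=
  match h ha with
  | Or.inl hb => absurd hb hnb   -- the B branch contradicts ¬B
  | Or.inr hc => hc              -- the C branch gives the conclusion directly
```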
↑ comment by Ben Pace (Benito) · 2018-05-17T19:41:45.741Z · LW(p) · GW(p)
Zulupineapple I feel like Said is trying to give a first lesson in propositional logic, a setting where all his statements are true. Were you trying to use the colloquial/conversational meaning of the word 'implies'?
↑ comment by zulupineapple · 2018-05-17T19:48:59.940Z · LW(p) · GW(p)
Yes, I explicitly said so earlier. And propositional logic makes no sense in this context. So I don't understand where the confusion is coming from. But if you have advice on how I could have prevented that, I'd appreciate it. Is there a better word for "implies" maybe?
↑ comment by zulupineapple · 2018-05-17T19:45:32.245Z · LW(p) · GW(p)
Maybe you're talking about the usual logic? I explained in the very comment you first responded to that by "X implies Y" I mean that "observing X leads us to believe that Y". This is a common usage, I assume, and I can't think of a better word.
And, if you see a wet sidewalk and know nothing about any sprinklers, then "rain" is the correct inference to make (depending on your priors). Surely we actually agree on that?
↑ comment by Said Achmiz (SaidAchmiz) · 2018-05-17T19:57:13.194Z · LW(p) · GW(p)
Yes, I saw your definition. The standard sort of generalization of propositional logic to probabilistic beliefs does not rescue your claims.
And, if you see a wet sidewalk and know nothing about any sprinklers, then “rain” is the correct inference to make (depending on your priors). Surely we actually agree on that?
No. If you’re leaving propositional logic behind and moving into the realm of probabilistic beliefs, then the correct inference to make is to use the information you’ve got to update from your priors to a posterior probability distribution over the possible states of the world. This is all standard stuff and I’m sure you know it as well as I do.
The outcome of this update may well be “P(rain) = $large_number; P(other things, such as sprinklers, etc.) = $smaller number”. You would, of course, then behave as if you believed it rained (more or less). (I am glossing over details, such as the overlap in P(sprinkler, etc.) and P(rain), as well as the possibility of “hybrid” behaviors that make sense if you are uncertain between two similarly likely possibilities, etc.; these details do not change the calculus.)
Characterizing this as “A implies C, but (A ∧ B) does not imply C” is tendentious in the extreme (not to mention so gross a simplification that it can hardly be evaluated as a coherent view).
Now, you might also be claiming something like “seeing a wet sidewalk does increase P(rain), but does not increase P(sprinkler)”. The characterization quoted in the above paragraph would be consistent with this claim. However, this claim is obviously wrong, so I assumed this wasn’t what you meant.
↑ comment by zulupineapple · 2018-05-17T20:31:19.454Z · LW(p) · GW(p)
So when I said "rain is the correct inference to make", you somehow read that as "P(rain) = 1"? Because I see no other explanation why you felt the need to write entire paragraphs about what probabilities and priors are. I even explicitly mentioned priors in my comment, just to prevent a reply just like yours, but apparently that wasn't enough.
Characterizing this as “A implies C, but (A ∧ B) does not imply C” is tendentious in the extreme (not to mention so gross a simplification that it can hardly be evaluated as coherent view).
Ok. How do you think I should have explained the situation? Preferably, in less than four paragraphs?
I personally find my explanation completely clear, especially since I expected most people to be familiar with the sidewalk/rain/sprinkler example, or something similar. But then I'm aware that my judgements about clarity don't always match other people's, so I'll try to take your advice seriously.
↑ comment by Said Achmiz (SaidAchmiz) · 2018-05-17T20:53:38.812Z · LW(p) · GW(p)
How do you think I should have explained the situation? Preferably, in less than four paragraphs?
Assuming that “the situation” in question is this, from upthread—
We unfortunately live in a world where sometimes A implies C, but A & B does not imply C, for some values of A, B, C.
I would state the nearest-true-claim thus:
“Sometimes P(C|A) is very low but P(C|A,B) is much higher, enough to make it the dominant conclusion.”
Edit: Er, I got that backwards, obviously. Corrected version:
“Sometimes P(C|A) is very high, enough to make it the dominant conclusion, but P(C|A,B) is much lower [this is due to the low prior probability of B but the high conditional probability P(C|B)]”.
↑ comment by zulupineapple · 2018-05-18T07:28:52.961Z · LW(p) · GW(p)
Ok, that's reasonable. At least I understand why you would find such explanation better.
One issue is that I worry about using the conditional probability notation. I suspect that sometimes people are unwilling to parse it. Also the "very low" and "much higher" are awkward to say. I'd much prefer something in colloquial terms.
Another issue, I worry that this is not less confusing. This is evidenced by you confusing yourself about it, twice (no, P(C|B), or P(rain|sprinkler) is not high, and it doesn't even have to be that low). I think, ultimately, listing which probabilities are "high" and which are "low" is not helpful, there should be a more general way to express the idea.
↑ comment by Michael Pye (michael-pye) · 2018-05-30T16:10:15.566Z · LW(p) · GW(p)
Zulupineapple, do you realise you have been engaged in a highly decoupled argument? Your point (made way back) valuing contextual conclusions is valid, but decoupling is needed to enhance those conclusions, and as it is harder by an order of magnitude it requires more practice and knowledge.
Personally I feel the terms abstract and concrete are more useful. Alternating between the two and refining the abstract ideas before applying them to concrete examples.