The Signal and the Corrective

post by MalcolmOcean (malcolmocean) · 2018-02-11T00:28:35.759Z · 3 comments

This is a link post for https://everythingstudies.com/2017/12/19/the-signal-and-the-corrective/


This is one of the most important articles I've read in a while. It makes a generalization from something Scott pointed out in this SSC post.

Here are a few excerpts, but it's worth clicking through and reading the whole thing!

From the inside, when you subscribe to a narrative, when you believe in it, it feels like you’ve stripped away all irrelevant noise and only the essence, The Underlying Principle, is left — the signal, in the language of information theory. However, that noise you just dismissed as irrelevant has other signals in it and sometimes people will consider them stronger, truer and more important.

The core of the concept:

Most people have somewhat moderate views and they recognize that there is a bit of truth to both of two apparently opposing narratives. This can mask fundamental differences between those appearing to be in agreement.
Like, look at this zebra:
[Image: a zebra]
We can all agree on what it looks like. But some of us will think of it as a white horse with black stripes and some as a black horse with white stripes, and while it doesn’t actually matter now, that might change if whether “zebras are fundamentally white” or “zebras are fundamentally black” ever becomes an issue of political importance.
In the real world zebras are (thank God) still politically neutral, but similar patterns exist. Two people with political views like:
“The free market is extremely powerful and will work best as a rule, but there are a few outliers where it won’t, and some people will be hurt so we should have a social safety net to contain the bad side effects.”
and
“Capitalism is morally corrupt and rewards selfishness and greed. An economy run for the people by the people is a moral imperative, but planned economies don’t seem to work very well in practice so we need the market to fuel prosperity even if it is distasteful.”
. . . have very different fundamental attitudes but may well come down quite close to each other in terms of supported policies. If you model them as having one “main signal” (basic attitude) paired with a corrective to account for how the basic attitude fails to match reality perfectly, then this kind of difference is understated when the conversation is about specific issues (because then signals plus correctives are compared and the correctives bring “opposite” people closer together) but overstated when the conversation is about general principles — because then it’s only about the signal.
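
To make the arithmetic of that last paragraph concrete, here's a toy sketch in Python. The scale, the labels, and all the numbers are made up by me for illustration; they're not from the article, which states the model only qualitatively.

```python
# Toy "signal plus corrective" model (illustrative numbers only, not the
# article's): stances live on a -1 (anti-market) to +1 (pro-market) axis.

people = {
    "pro-market with safety net": {"signal": +1.0, "corrective": -0.5},
    "anti-capitalist pragmatist": {"signal": -1.0, "corrective": +1.25},
}

# On a specific policy issue, what you observe is signal + corrective.
positions = {name: p["signal"] + p["corrective"] for name, p in people.items()}
print(positions)
# {'pro-market with safety net': 0.5, 'anti-capitalist pragmatist': 0.25}

# Compare the gaps: the signals (general principles) are maximally far
# apart, while the issue-level positions nearly coincide.
signals = [p["signal"] for p in people.values()]
print(abs(signals[0] - signals[1]))   # 2.0  -> principles-level disagreement
vals = list(positions.values())
print(abs(vals[0] - vals[1]))         # 0.25 -> issue-level disagreement
```

With these made-up numbers, the gap at the level of principles is 2.0 while the gap at the level of specific policies is only 0.25, which is exactly the understated/overstated asymmetry the quote describes.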

Click through to read the whole article, which includes a ton of great examples (concrete and abstract).

3 comments


comment by Chris_Leong · 2018-02-11T10:30:22.530Z

I've spent quite a bit of time thinking about why arguments are so hard to resolve, and I suspect that very often the greatest challenge comes from how people's conclusions often depend on these widescale generalisations. When talking about specific facts, disagreements are often tractable, but when talking about widescale generalisations, it is often impossible to make any significant progress, as doing so would require you to litigate hundreds of separate facts. Take "Are markets generally good?", for example: I suspect we can all think of dozens of examples of corporations behaving badly, as well as many places where competitive markets are good. It really isn't easy to balance all of these considerations against each other, so when we get to the question of weighing it all, it becomes highly subjective.

comment by habryka (habryka4) · 2018-02-11T01:10:16.609Z

+1 for extracting key quotes from the link!

comment by Marin H (marin-h) · 2018-02-11T18:09:20.570Z

I feel like the figure-ground idea is useful, but this post runs with it a little too far.

On the one hand, people definitely do have background assumptions about the overall goodness or badness of a thing, and conversations can be unproductive if the participants debate details without noticing how different their assumptions are. The figure-ground inversion is a good metaphor for the kind of shift in perspective you need to get a high-level look at seemingly contradictory models.

On the other hand, though, conversations about details are how people build their models in the first place. People usually aren't reasoning from first principles; they're taking in tons of information and making tiny updates to their models over time. And the majority of people end up not as pro- or anti-Thing zealots, but as people with complex models of Thing, amenable to further updates. To me, this looks like collective epistemology working pretty well, and it doesn't support the post's claim that our failure to explicate background assumptions is a major problem with discourse.

"If you’re with someone with an opposite signal, you prioritize boosting your own signal and ignore your own corrective that actually agrees with the other person. However, when talking to someone who agrees with your signal you may instead start to argue for your corrective. And if you’re in a social environment where everyone shares your signal and nobody ever mentions a corrective you’ll occasionally be tempted to defend something you don’t actually support (but typically you won’t because people will take it the wrong way)."

Again, this seems to describe people who are doing just fine, who understand the need for nuance and can take different approaches according to social context. These don't seem like the behaviors of people who are too fundamentalist about their assumptions.