Being Correct as Attire
post by Unreal · 2017-10-24T10:04:10.703Z · LW · GW
[ Related to: Belief as Attire, Lost Purposes, Something to Protect ]
Having correct beliefs is Important, with a capital I.
And yet, this too, feels susceptible to Goodhart's Law.
Imagine being in the middle of a group conversation. The group seems to be having an intellectual discussion. Maybe on Bitcoin or AI or EA. Whatever it is.
All seems to be going as expected. But at the corner of your attention, you sense something is off.
People are being a little too eager to correct each other's factual errors. When it's unclear which fact is correct, people keep talking about it for another 5 or 10 minutes, before someone suggests looking it up. It takes another 5 minutes for anyone to actually look it up.
At other times, as soon as there is a hint of a factual ambiguity, someone leaps to the internet, phone appearing magically in hand. You fail to question this action, thinking it virtuous.
People seem to be talking about important, scientific-sounding things. But you also have a vague sense of disconnectedness, as though this conversation will have no impact on tomorrow or the next day. You walk away without any new updates.
You hear a very similar conversation on the topic at the next party, with largely the same facts. Same opinions. Same disagreements.
Having correct beliefs is Important, so as soon as I notice an error, I correct it, right?
This is not totally unlike saying, "Getting the right answer is Good, so I should try to get right answers on my tests, right?"
I'm not saying truth-seeking is a purely instrumental process.
Truth for its own sake, yes.
Correctness for its own sake, no.
Getting correct beliefs is only a subset of truth-seeking. If correct beliefs are the only thing you aim for, the effort is likely to fall prey to Lost Purposes.
But how can you tell whether it's connected to the larger goal or not?
One way I can try to tell is to check for model-based reasons the facts are being brought up. Is there a model under consideration?
Maybe it's not explicitly being discussed but is lurking close-by underneath. I can try to directly ask, "Why is this relevant?" or I can poke at it, "Is the reason you brought this up because you expect the price of Bitcoin to rise? Or maybe you're considering buying Ethereum instead?"
One reason it's sometimes recommended to Double Crux on actions rather than beliefs is that a disagreement on actions is more likely to involve models.
It is easy for beliefs, opinions, and facts to get away with floating unanchored by a model. If this is happening, it's more likely we're also unanchored from truth-seeking and may have fallen into "being right."
How can I tell that I, personally, am working with a model? A model lets me make predictions and inferences. A model produces "tastes"—intuitive senses that draw me towards or away from things in the world. If I engage my Inner Simulator, booted up with a model, it will produce expectations or at least positive or negative feelings that point towards my expectations—"gut reactions". Models also produce curiosities and questions—my models often have holes, and when I notice a relevant hole, I get curious in order to fill it.
//
Striking closer to the core, there is a Reason that having correct beliefs is Important with a capital I, and being connected to that Reason is what will help you stay on course.
It is hard to pinpoint and name the Reason. (I was kind of surprised to find this. It feels like it's supposed to be obvious.)
There's a bunch of social things I'm motivated by:
People being nice to me
People saying good things about me
People liking my ideas
People appreciating me and paying attention to me
Feeling better than other people
These often feel relevant and important too. But the Reason isn't one of those.
It's closer to...
You remember the last scene in Ender's Game? (Spoilers ahead)
...
When Ender was fighting the final battle against the buggers?
And he didn't realize he was even doing that until afterward.
There was a way it was important to be right—right about the territory, in those moments. This importance was pretty much ingrained in his bones, even though he didn't know the larger consequences of the situation. But he also knew which things were more important to be right about, and so he was able to ignore the things that weren't.
That said, maybe he should've noticed his confusion at the different setup, the shift in atmosphere. He managed to get deeply connected to one layer of relevance but was disconnected from the one wrapped around it.
//
OK, it's possible this was all obvious.
Here are a few things that might be less immediate.
1. In a collaborative truth-seeking environment, how confidently you present should correlate with how much model you're working with. It should also correlate with how relevant your model is to the thing at hand. So you have your own internal models of the world, and also a model of what's going on in the conversation / what other people care about / why we're talking about this. Your confidence should signal how well you're calibrated with both of those.
2. Being wrong in front of other people isn't supposed to be embarrassing. Being wrong about a thing is a moment to check whether you might have been even more wrong than you'd been signaling. Being significantly more wrong than you were signaling is embarrassing. Being wrong about irrelevant facts is zero embarrassing, and being able to tell which facts are the ones worth getting embarrassed over is closer to the thing.
3. If checking for models turns into yet another way to assess whether people are good/bad/doing it right/whatever, it's still missing the point. It's not about assessing people (this is just a useful proxy). It's about trying to tell what we're trying to do at any given moment and whether we've lost the way and also whether our group cognition is properly online.
4. It's okay to have conversations that aren't about truth-seeking and are just about sharing facts or news or trying to look smart or whatever. It's just good to keep them as separate bins in your mind, and hopefully no one else is getting confused. I think it's actively good to have the latter kind of conversation.
5. It is a bit dangerous for people to reveal how little model they actually have—they might get excluded for this reason. Or, even if they have models, they might get excluded if they continuously fail to convey this, for whatever reason. So there is a serious reason to avoid a super-frank truth-seeking situation where it becomes clear to everyone what models each person is bringing to the table. (I feel this anxiety a lot.)
6. The conflation between 'making contributions' and 'worth as a human being' causes a lot of pain. It gets worse the less trust there is, but larger inferential gaps tend to lower ability to build trust, and closing inferential gaps is hard and time-consuming ... but it's hard to trust that as an excuse. I think this is one of the Hard Problems.
[ Originally posted on Facebook, contains minor edits ]