Trust-Building: The New Rationality Project
post by DirectedEvolution (AllAmericanBreakfast) · 2020-05-28T22:53:36.876Z · LW · GW · 15 comments
“I did not write half of what I saw, for I knew I would not be believed”
-Marco Polo
It's enlighteningly disturbing to see specifically how the "distrust those who disagree" heuristic descends into the madness of factions.
-Zack M Davis [LW · GW]
Old LessWrong: we fail to reach the truth because of our cognitive biases.
New LessWrong: we fail to reach the truth because of reasonable mistrust.
What causes mistrust? It's a combination of things: miscommunication and mistakes; the presence of exploitative actors; lack of repeated trust-building interactions with the same people; and high payoffs for epistemically unvirtuous behavior. Enough mistrust can destroy our ability to converge on the truth.
Once mistrust has led us into factionalism, can we escape it?
Factionalism is not an entirely bad thing. It means that we are willing to accept some evidence from other people, as long as it's not too divergent from our priors. Factions are worse than unity, but better than isolation.
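To see the dynamic concretely, here is a toy bounded-confidence simulation of the "accept evidence unless it's too divergent" rule. This is only a sketch of the mechanism, not a model of any real community, and every parameter below is an arbitrary choice:

```python
import random

# A toy bounded-confidence model (my own construction, not from Zack's
# linked comment). Each agent holds a belief in [0, 1]; on hearing a
# random peer, it updates toward the peer's belief only if the gap is
# within a trust threshold. All parameters are invented for illustration.

N_AGENTS = 100
THRESHOLD = 0.2  # maximum divergence an agent will still trust
STEP = 0.3       # how far a listener moves toward a trusted speaker
ROUNDS = 20000

random.seed(0)
beliefs = [random.random() for _ in range(N_AGENTS)]

for _ in range(ROUNDS):
    listener, speaker = random.sample(range(N_AGENTS), 2)
    gap = beliefs[speaker] - beliefs[listener]
    if abs(gap) <= THRESHOLD:        # "not too divergent from our priors"
        beliefs[listener] += STEP * gap
    # otherwise the report is discarded as untrustworthy: no update

print(sorted(round(b, 2) for b in beliefs))
# Typical output: a few tight, widely separated clusters, i.e. factions.
# With THRESHOLD = 1.0 (trust everyone) the same population converges
# to a single consensus.
```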
What we want isn't a lack of factionalism, it's unity.
This suggests an activism strategy. Let's sort factions into three categories:
- Your Community: You have high trust in this network, and believe the evidence you receive from it by default. Although you don't trust everyone who calls themselves a part of this network or community, you know who belongs and who doesn't, who's reliable and who's not, and how to tell the difference when you meet someone new in the network.
- Strangers: This network is untested, and your community doesn't have a strong opinion on it. It would take substantial work to learn how to navigate this foreign network. But because they are relatively isolated from your own community, they have evolved a different constellation of evidence. The existence of factionalism is itself evidence that you'd have something new to gain by trading information with them.
- Enemies: This network has been examined, either by you or by your community, and labeled a toxic breeding ground of misinformation. Treating your enemies as though they were merely strangers would only alienate you from your own community. They might be exploitative or stupid, but either way, engaging with them can only make things worse. All there is to do is fight, deprogram, or ignore this lot.
To increase unity and pursue the truth, your goal is to find foreign communities, and determine whether they are friendly or dangerous.
Note that the goal is not simply to make more friends and get exposed to new ideas. That's a recipe for naivete. The real goal is to accurately distinguish strangers from enemies, and to make introductions and facilitate sharing with strangers, but not with enemies. We might respect, disparage, or ignore our enemies, but we know how to tell them apart from our allies and our own:
“Here people was once used to be honourable: now they are all bad; they have kept one goodness: that they are greatest boozers.”
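To make the stakes of that discrimination concrete, here is a minimal expected-value sketch of the stranger-or-enemy call. The framing is a toy, and every probability and payoff below is invented for illustration:

```python
# A toy decision table for engaging an unknown network. All numbers are
# hypothetical stand-ins, chosen only to make the asymmetry visible.
P_ENEMY = 0.3  # assumed prior that the unknown network is toxic

payoff = {
    ("engage", "stranger"): +10,  # new evidence traded into your community
    ("engage", "enemy"):    -20,  # misinformation imported, standing lost
    ("avoid",  "stranger"):   0,  # opportunity silently missed
    ("avoid",  "enemy"):      0,  # nothing gained, nothing lost
}

def expected_value(action: str) -> float:
    """Expected payoff of an action under the current enemy prior."""
    return ((1 - P_ENEMY) * payoff[(action, "stranger")]
            + P_ENEMY * payoff[(action, "enemy")])

for action in ("engage", "avoid"):
    print(action, expected_value(action))
# engage 1.0, avoid 0.0: engaging barely wins at this prior. Most of the
# value comes from sharpening P_ENEMY per network (accurately telling
# strangers from enemies), not from engaging indiscriminately.
```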
One of the many difficulties is the work it takes to discover, evaluate, and bring back information about new strangers, the truly foreign. Their names and ideas are most likely unknown to anybody in your community, and they speak a different language or use different conceptual frameworks.
Worse, your community is doing a steady business in its own conceptual framework. You don't just need to explain why the new people you've discovered are trustworthy; you also need to explain why their way of thinking is valuable enough to justify the work of translating it into your own language and conceptual framework, or of learning theirs.
Luckily, you do have one thing on your side. Foreign communities usually love it when strangers express a genuine interest in absorbing their ideas and spreading them far and wide.
You might think that there is a time to explore, and a time to move toward a definite end. But this isn't so.
When there's a definite end in mind, moving toward it is the easy part.
But meaning and value mostly come from novelty.
When it feels like there's no need to explore, and all you need to do is practice your routine and enjoy what you have, the right assumption is that you are missing an opportunity. This is when exploration is most urgent. "What am I missing?" is a good question.
“You will hear it for yourselves, and it will surely fill you with wonder.”
What is our community reliably missing?
15 comments
Comments sorted by top scores.
comment by Vaniver · 2020-05-29T17:27:55.733Z · LW(p) · GW(p)
What we want isn't a lack of factionalism, it's unity. ... You have high trust in this network, and believe the evidence you receive from it by default.
I think one of the ways communities can differ is in their local communication norms. Rather than saying something like "all communities have local elders whose word is trusted by the community", and then trying to figure out who the elders are in various communities, you can try to measure something like "how do people react to the statements of elders, and how does that shape the statements elders make?". In some communities, criticism of elders is punished, and so they can make more loose or incorrect statements and the community can coordinate more easily (in part because they can coordinate around more things). In other communities, criticism of elders is rewarded, and so they can only make more narrow or precise statements, and the community can't coordinate as easily (in part because they can coordinate around fewer, perhaps higher-quality, things).
It seems to me like there's a lot of value in looking at specific mechanisms there, and trying to design good ones. Communities where people do more reflexive checking of things they read, more pedantic questions, and so on do mean "less trust" in many ways and do add friction to the project of communication, but that friction does seem asymmetric in an important way.
↑ comment by DirectedEvolution (AllAmericanBreakfast) · 2020-05-29T19:53:10.026Z · LW(p) · GW(p)
It sounds like you're suggesting that there's a tension between trust and trustworthiness. A trusting community will have an easier time coordinating action, but may make bad decisions due to a lack of critical thinking. By contrast, a critical community will have a hard time coordinating, but when it does succeed, its projects will tend to be more trustworthy.
If this is an accurate interpretation, I agree.
I also think that the problem of factions that I'm describing fits with this. Mistrust in other people's basic intuitions causes the rise of intellectual factions, each of which has high in-group trust and consequently low trustworthiness:
Different intuitions -> mistrust -> assuming contrary evidence is based on a flawed process -> factions -> uncritical in-group trust.
comment by romeostevensit · 2020-05-29T00:30:08.909Z · LW(p) · GW(p)
- Wealthy people are more reliable, because on average reliability is one of the inputs to wealth building.
- Wealthy people are less reliable, because they can afford to blow things off if at any given moment their BATNA exceeds their agreement (lower switching costs as a fraction of their wealth, plus less concern about future problems from defecting).
Hard to predict which dominates...
I frame it like this because rationalists are, globally speaking, quite wealthy. And I expect that people are about as trustworthy as they've found they need to be. If you are wealthy and excessively trustful, your excess trust will get arbitraged down. People are quite proud of not falling for ingroup-outgroup dynamics. But ingroups are also well-kept gardens, where trust doesn't get exploited and thus high-trust value chains can get constructed. Later, once fully fleshed out and understood, such chains can be exported to lower-trust environments via formal rules. But their formation is fragile.
comment by Vaniver · 2020-05-29T17:11:13.882Z · LW(p) · GW(p)
When it feels like there's no need to explore, and all you need to do is practice your routine and enjoy what you have, the right assumption is that you are missing an opportunity. This is when exploration is most urgent.
I think good advice is often of the form "in situation X, Y is appropriate"; from a collection of such advice you can build a flowchart of observations to actions, and end up with a full policy.
Whenever there is a policy that is "regardless of the observation, do Y", I become suspicious. Such advice is sometimes right: it may be the case that Y strictly dominates all other options, or that it performs well enough that it's not worth the cost of checking whether you're in the rare case where something else is superior.
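As a minimal sketch of what such a policy looks like (the situations and actions below are made up for illustration):

```python
# Advice of the form "in situation X, Y is appropriate", collected into a
# policy: a mapping from observations to actions. All entries are made up.
conditional_policy = {
    "stagnant routine":   "explore alternatives",
    "promising new lead": "investigate it",
    "deadline looming":   "exploit what already works",
}

def constant_policy(observation: str) -> str:
    """The suspicious kind: the same action regardless of the observation."""
    return "explore alternatives"

for situation, action in conditional_policy.items():
    print(f"{situation}: conditional -> {action}, "
          f"constant -> {constant_policy(situation)}")
# A constant policy is only safe if its one action dominates every row
# of the table, or comes close enough that checking isn't worth it.
```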
Is the intended reading of this "exploitation and routine is never correct"? Is exploration always urgent?
↑ comment by DirectedEvolution (AllAmericanBreakfast) · 2020-05-29T19:27:09.594Z · LW(p) · GW(p)
It's that exploration should always be a component of your labor, even when you are more heavily weighting exploitation and routine. Even if you like your job, spend some time every year pondering alternatives.
Exploration and exploitation can also be nested. For example, the building of the hydrogen bomb was a research program (exploratory) but also a large-scale commitment to a formal project (exploitation), in which one scientist took the time to think about whether the bomb might light the air on fire (exploration). It's probably not a good idea for a project to focus all its energies on execution and exploitation without setting aside some resources for considering alternatives and issues.
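One standard way to formalize "exploration should always be a component" is an epsilon-greedy rule from the multi-armed bandit literature. This is only a loose analogy, and the options and payoff probabilities below are invented:

```python
import random

# Epsilon-greedy: reserve a fixed fraction of effort for exploration even
# while mostly exploiting the best-known option. Payoffs are made up.
random.seed(0)
true_payoffs = {"current job": 0.6, "alternative A": 0.5, "alternative B": 0.8}
estimates = {option: 0.0 for option in true_payoffs}
counts = {option: 0 for option in true_payoffs}
EPSILON = 0.1  # the ever-present exploration budget

for _ in range(5000):
    if random.random() < EPSILON:                # explore: try anything
        choice = random.choice(list(true_payoffs))
    else:                                        # exploit: best current estimate
        choice = max(estimates, key=estimates.get)
    reward = 1.0 if random.random() < true_payoffs[choice] else 0.0
    counts[choice] += 1
    estimates[choice] += (reward - estimates[choice]) / counts[choice]  # running mean

print(estimates)
# Exploitation alone would lock in whichever option looked best first;
# the 10% exploration budget is what eventually surfaces "alternative B".
```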
comment by Gordon Seidoh Worley (gworley) · 2020-05-29T16:36:31.587Z · LW(p) · GW(p)
To increase unity and pursue the truth, your goal is to find foreign communities, and determine whether they are friendly or dangerous.
This is an interesting framing I hadn't considered. I usually think of it more as just an exploration to find what perspectives communities/traditions/etc. offer in an ongoing hermeneutical search for the truth, digging into the wrong models other people have to get a bit less wrong myself, which seems pretty compatible with your take.
↑ comment by DirectedEvolution (AllAmericanBreakfast) · 2020-05-29T19:57:30.904Z · LW(p) · GW(p)
Yes, that seems very much along the lines I'm setting out!
I think part of where my take goes a level deeper is in pointing out that you're probably selective in which perspectives you seek out. There are probably some you wouldn't touch with a ten-foot pole. Others that you would, but only in secret, because your primary trust-networks would find them dangerous. And some that are in that sweet spot of being foreign enough to be novel for your trust-network, but also friendly enough to be accepted. It's this latter category that I'm suggesting is most valuable to find, in the spirit of reconciling factionalism to move toward truth.
comment by Donald Hobson (donald-hobson) · 2020-05-29T11:51:41.057Z · LW(p) · GW(p)
I don't think that factionalism is caused solely by mistrust. Mistrust is certainly a part of the picture, but I think that interest in different things is also a part. Consider the factions around two substantially different academic fields, like medieval history and pure maths. The mathematicians largely trust that the historians are usually right about history. The historians largely trust that the mathematicians are usually right about maths. But each field is off pursuing its own questions with very little interest in the other.
What we want isn't a lack of factionalism, it's unity.
I am not sure we do want unity. Suppose we are trying to invent something. Once one person anywhere in the world gets all the pieces just right, then it will be obviously good and quickly spread. You want a semiconductor-physics faction and a computer-science faction somewhere in the world to produce smartphones. These factions can and do learn from the maths and chemistry factions; the factions they don't interact with are either adversarial or irrelevant.
↑ comment by ChristianKl · 2020-05-29T13:49:42.563Z · LW(p) · GW(p)
Once one person anywhere in the world gets all the pieces just right, then it will be obviously good and quickly spread.
Unless the invention destroys everything when it quickly spreads. When we want to prevent X-risk from unfriendly AI, we do need some unity.
↑ comment by DirectedEvolution (AllAmericanBreakfast) · 2020-05-29T19:41:14.964Z · LW(p) · GW(p)
Once one person anywhere in the world gets all the pieces just right, then it will be obviously good and quickly spread.
My post is inspired by Zack M Davis's comment that's linked at the top. The point there was that when beliefs diverge so much that we start discounting valid evidence because we mistrust the source, then the truth will not be obvious and will not spread.
Smartphones are a great case in point. Although it's obvious that they function mechanically as promised, it's not at all clear that they're a good thing for humanity. There's plenty of research about whether they're driving mental health issues, degrading our relationships, sucking up our time, and draining our income.
There is a truth of the matter, but some people find it hard to trust evidence one way or the other. They grow suspicious of the evidence offered by factions claiming that smartphones are or aren't doing these things; suspicious of the incentives and intuitions they believe drive those factions' erroneous conclusions; and unwilling to believe even a systematic weighting of the evidence, since garbage in means garbage out (GIGO).
The kind of unity I'm referring to here is widespread agreement on how to interpret the state of the evidence on any given proposition. It's what Kuhn would call "normal science." I think this is uncontroversially desirable. Unity on physics, chemistry, computer science, and so on is what produced the smartphone.
But not only is the evidence in many other fields (and even within these fields, in some areas) incomplete, the experts are divided into long-standing, intractable factions that disagree about how to interpret it, won't change their minds, and aren't interested in trying.
The old cognitive bias frame suggests that we need to individually overcome our irrational biases in order to correct this problem.
The new trust-building frame suggests that instead, people are basically rational, but are failing to build sufficient trust between members of opposing factions to reconcile the evidence and produce movement toward truth.
↑ comment by Donald Hobson (donald-hobson) · 2020-05-30T11:34:05.889Z · LW(p) · GW(p)
I agree that this is a real phenomenon that can happen in domains where verifying a correct answer is not much easier than coming up with one.
However, epistemic norms regarding what counts as valid evidence are culturally transmitted. People using Occam's razor will come to different conclusions from the people using divine revelation. (Similar to how mathematicians using different axioms will come to different conclusions.)
↑ comment by DirectedEvolution (AllAmericanBreakfast) · 2020-05-30T16:37:40.760Z · LW(p) · GW(p)
"I don't trust your evidence."
I suspect that the factions which can be reconciled are those that are operating in a shared epistemic paradigm. For example, two opposing scientific factions that agree on the basic principles of logic and evidence, but do not trust the research that the other side is producing - perhaps because they believe it's sloppily done or even fraudulent. To be concrete, the debate on whether the minimum wage does or does not harm economic growth has voluminous evidence on each side, produced by serious researchers, but it apparently hasn't caused either side to budge.
A second example is between religious believers. Two camps might accuse each other of a belief in God that isn't based on genuine revelation. For example, Christians discount the Quran, while Muslims discount the claim that Jesus was the son of God. But neither side questions that the other is genuinely motivated by faith.
"You're just pretending to care about our epistemic norms"
Less tractable is resolving a dispute between factions that agree on an epistemic paradigm in theory, but suspect that in practice the other side is not actually using it. An example here is the disagreement between Western medicine and naturopathy. Naturopaths accuse Western doctors of pretending that their practice is more evidence-based than it really is (which is often a fair accusation). There are some naturopaths who claim that practices like homeopathy and acupuncture have an evidence base, and Western doctors accuse them not only of sloppy research, but of using research merely as cover for a fundamentally non-evidence-based approach to medicine.
A religious example is between followers of the same religion who might accuse each other of faking faith to serve some worldly goal. This accusation gets leveled at prosperity gospel, fundamentalists, cults, and mainline religions.
"Our epistemic norms are incompatible."
More intractable still is a dispute between factions that openly disagree on epistemic norms. Your example is between believers in divine revelation and adherents of Occam's razor/logic/evidence.
comment by ChristianKl · 2020-05-29T10:08:25.182Z · LW(p) · GW(p)
The advice seems to me like it's advocating cultism too much. I have plenty of relationships that have nothing to do with the rationality community, and I would hope the same is true for most members of our community.
↑ comment by DirectedEvolution (AllAmericanBreakfast) · 2020-05-29T15:14:05.885Z · LW(p) · GW(p)
I think it's advocating against cultism, but if that's your takeaway, my bad for not making that clearer!
comment by Slider · 2020-05-29T20:16:02.664Z · LW(p) · GW(p)
I like that my mouth, ears and eyes produce different data in the same situation. An extreme of "unity" could be that every sense reports the same finding. Sure, there are situations that the idiom "I can't believe my eyes" points to, but even that case can be described as an (ab)use of language: it's the brain's job to form thoughts and the eyes' job to register the electromagnetic field.
One shouldn't delegate thinking to other persons, and contextualising others' contributions as input rather than output is a road to becoming more sensitive to reality.