Crony Beliefs
post by plex (ete) · 2016-11-03T20:54:07.716Z · LW · GW · Legacy · 15 comments
This is a link post for http://www.meltingasphalt.com/crony-beliefs/
Comments sorted by top scores.
comment by GMHowe · 2016-11-04T01:08:14.995Z · LW(p) · GW(p)
I really liked this post. I thought it was well written and thought provoking.
I do want to push back a bit on one thing though. You write:
What makes for a crony belief is how we're rewarded for it. And the problem with beliefs about climate change is that we have no way to act on them — by which I mean there are no actions we can take whose payoffs (for us as individuals) depend on whether our beliefs are true or false.
It is true that most of us probably won't take actions whose payoffs depend on beliefs about global warming, but it is not true that there are no such actions. One could simply make bets about the future global average temperature.
So the problem is not that there are no actions we can take whose payoffs depend on whether our beliefs are true or false. Rather, beliefs about global warming are likely to be crony beliefs because the subject has become highly political. And as you correctly point out, in politics social rewards completely dominate pragmatic rewards.
To illustrate, it is even harder to find actions we can take whose payoffs depend on the accuracy of the belief that the Great Red Spot is a persistent anticyclonic storm on the planet Jupiter. Does this mean that a belief in the Great Red Spot is even more likely to be cronyistic than a belief regarding global warming?
comment by Secret_Tunnel · 2016-11-04T23:15:29.109Z · LW(p) · GW(p)
What makes for a crony belief is how we're rewarded for it. And the problem with beliefs about climate change is that we have no way to act on them — by which I mean there are no actions we can take whose payoffs (for us as individuals) depend on whether our beliefs are true or false. The rare exception would be someone living near the Florida coast, say, who moves inland to avoid predicted floods. (Does such a person even exist?) Or maybe the owner of a hedge fund or insurance company who places bets on the future evolution of the climate. But for the rest of us, our incentives come entirely from other people, from the way they judge us for what we believe and say. And thus our beliefs about climate change are little more than hot air (pun intended).
For those who actually want to affect climate change with their actions, this kinda goes hand-in-hand with the idea that stating your goals before you start working on them sabotages you due to getting social credit without having to do anything. Maybe the most useful beliefs should be kept private?
Then again, it's important to spread correct ideas too, and that can't happen if we all stay shut up about them. Talking about climate change might honestly be the most impactful thing a lot of people are capable of (practically) doing. As much as I like to get bitter about hashtag activism, I don't think we can ignore that it's probably had at least some positive effects, even if they're not optimal relative to how much effort people put into yelling on the internet.
comment by NancyLebovitz · 2016-11-04T21:04:08.370Z · LW(p) · GW(p)
This suggests that being hypocritical about crony beliefs is actually healthy.
Robert, the mayor's nephew, has a bulletproof job as a business analyst. If he's incompetent, his advice is ignored. He isn't given a job with actual responsibilities either.
This is like a person in a pro-astrology environment who knows enough about astrology to take part in conversations, never criticizes astrology, and doesn't let astrology affect their decisions.
(reposted from facebook)
comment by entirelyuseless · 2016-11-05T03:26:13.727Z · LW(p) · GW(p)
The problem with trying to find other people who will judge you based on the accuracy of your beliefs is that they cannot do that. They will just judge you based on how close your beliefs are to theirs, since they think that their beliefs are true. So you will simply start being motivated to accept the beliefs of that community, true or not.
The better way is to abandon every community, and consequently you will no longer be significantly influenced by such things.
↑ comment by Dagon · 2016-11-05T16:25:20.032Z · LW(p) · GW(p)
There are lots of dimensions to beliefs and values. Many communities accept a wide range of variance on some dimensions, as long as you mostly conform to others.
Rather than abandoning every community, try a bunch of them until you find one where you're relatively confident in your beliefs that match the things they use for inclusion testing.
↑ comment by entirelyuseless · 2016-11-05T17:05:22.445Z · LW(p) · GW(p)
I am already sure that all communities include, as core beliefs or very close to core, things that I am very confident are false.
I learned that from experience, but it is easy to come up in hindsight with theoretical reasons why that would be likely to be the case.
↑ comment by Lightwave · 2016-11-20T11:21:58.509Z · LW(p) · GW(p)
things that I am very confident are false
Could you give any example?
↑ comment by entirelyuseless · 2016-11-21T16:09:29.412Z · LW(p) · GW(p)
Of things that I am very confident are false which are believed by communities? Basically things like "the other communities have very little grasp on reality," when in fact they all share a large core of things in common. But when the other community describes that core in different words, they say that the words are meaningless or ignorant or false, even though in fact they are all talking about the same thing and are roughly in agreement about it.
For example, when Eliezer talks about "how an algorithm feels from the inside," he is basically talking about the same thing that Thomas Nagel is talking about when he talks about things like "what it is like to be a bat." But I suspect that Eliezer would be offended by the comparison, despite its accuracy.
Likewise, Eliezer's identification of AIs with their program is basically the same kind of thing as identifying a human being with an immaterial soul -- both are wrong, and in basically the same way and for the same reasons, but there is something right that both are getting at. Again, I am quite sure Eliezer would feel offended by this comparison, despite its accuracy.
The same thing is true of TDT -- it is basically in agreement with a form of virtue theory or deontological ethics. But since Eliezer came to his conclusion via utilitarianism, he thinks he is right and the others are wrong. In reality they are both right, but the other people were right first.
Of course this happens a bit differently with communities than it does with individuals and individual claims. I used individuals in these examples because the situation is clearer there, but there is an analogous situation with communities. This might be a selection effect -- a community preserves its own existence by emphasizing its differences from others. Consider how diverse languages develop. Naturally there would just be a continuum of languages, with the people in the middle speaking something intermediate between the people on the two ends. But larger breaks happen because people say, "we don't talk like those fellows on the other side of the fence." In the same way, communities preserve their existence by emphasizing how bad the other communities are.
The fact that I do not want to do this means that I cannot fit well into any community.
↑ comment by Dagon · 2016-11-07T14:54:04.578Z · LW(p) · GW(p)
Odd. I've found a number of communities (typically smaller ones) where there are some wrong beliefs floating around, but whose core membership criteria fit me well.
If literally all groups hold core beliefs that you are confident are wrong, perhaps re-examine your confidence level. If all public groups in a region hold wrong beliefs, expand your view of communities to include other regions and smaller, private groups.
↑ comment by entirelyuseless · 2016-11-07T15:13:41.732Z · LW(p) · GW(p)
"re-examine your confidence level"
I try to do that. Being part of a community would impede that process for at least some beliefs.
You are probably right about the smaller groups, but there are high search costs, especially since I am an introvert. And for a similar reason it does not bother me much to live alone and without any community.
comment by WhySpace_duplicate0.9261692129075527 · 2016-11-05T03:01:19.751Z · LW(p) · GW(p)
TL;DR: Instrumental rationality leads us all to at least a few false beliefs, via rational irrationality. (That is, it is instrumentally rational to believe lies if there are social rewards.) In this case, reading the Sequences is only treating the symptoms, since our biases all stem from bad incentives. The most promising solution is to build communities which actively celebrate epistemic rationality, since that aligns social incentives with accurate beliefs and methods of acquiring them.
I highly recommend this. I've read the Sequences and thought a lot about rationalization and the like, but somehow I never made the full connection between that and instrumental rationality.
My only real complaint is with word choice, rather than substance. It's difficult to think of oneself as a crony, so coining "crony beliefs" may have been a suboptimal way of helping us recognize certain beliefs as crony beliefs, let alone call them that openly. Maybe we can fight this by making it a community norm to use the term for any belief we benefit from holding. It would also have been nice to see rational irrationality name-dropped, although the author did a much better job than the Sequences at leaving breadcrumbs to the sources of these ideas.
comment by jeremygordon · 2017-04-11T09:46:55.435Z · LW(p) · GW(p)
I only just discovered this article, and found it extremely useful.
My issue with it, however, is that the author argues that external social influence is the primary source of crony beliefs: "...social incentives are the root of all our biggest thinking errors."
I think this glosses over an entire class of beliefs that would fall neatly into the crony definition, which need not have anything to do with the opinions of others: self-deception, denial, etc. Surely we maintain a large number of views that are less meritocratic, less good at modeling the world, simply because they make us comfortable, or allow us to ignore aspects of the world that would otherwise hold us back.
Said another way, I believe our minds are capable of cronyism even in a completely non-social world, as a way to filter out unpleasant likelihoods. Examples might include: keeping stress at bay during risky activities (crony belief: 'it's not that risky'), or staying motivated in the face of repeated failure (crony belief: 'if I try harder next time I'll definitely succeed').
As such, this is extremely hard for me to agree with: "Suppose we weren't Homo sapiens but Solo sapiens, a hypothetical species as intelligent as we are today, but with no social life whatsoever... In that case, it's my claim that our minds would be clean, efficient information-processing machines — straightforward meritocracies trying their best to make sense of the world."