How to calibrate your political beliefs
post by Michael Wiebe (Macaulay) · 2013-05-12T20:09:21.466Z · LW · GW · Legacy · 75 comments
So you're playing the credence game, and you’re getting a pretty good sense of which level of confidence to assign to your beliefs. Later, when you’re discussing politics, you wonder how you can calibrate your political beliefs as well (beliefs of the form "policy X will result in outcome Y"). Here there's no easy way to assess whether a belief is true or false, in contrast to the trivia questions in the credence game. Moreover, it’s very easy to become mindkilled by politics. What do you do?
In the credence game, you get direct feedback that allows you to learn about your internal proxies for credence, i.e., emotional and heuristic cues about how much to trust yourself. With political beliefs, however, there is no such feedback. One workaround would be to assign high confidence only to beliefs for which you have read n academic papers on the subject. For example, only assign 90% confidence if you've read ten academic papers.
To account for mindkilling, use a second criterion: assign high confidence only to beliefs for which you are ideologically Turing-capable (i.e., able to pass an ideological Turing test). As a proxy for an actual ideological Turing test, you should be able to accurately restate your opponent’s position, or be able to state the strongest counterargument to your position.
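For concreteness, here is a minimal sketch of the two criteria combined into a single confidence cap. Only the ten-papers/90% point comes from the text above; the lower tiers and the function itself are hypothetical placeholders for illustration.

```python
def max_confidence(papers_read: int, passes_itt: bool) -> float:
    """Upper bound on the confidence to assign a political belief.

    Only the ten-papers -> 90% mapping comes from the post; the lower
    tiers are hypothetical placeholders.
    """
    if not passes_itt:
        # Second criterion: without ideological Turing-capability,
        # don't hold the belief with any real confidence.
        return 0.5
    if papers_read >= 10:   # first criterion: breadth of reading
        return 0.9
    if papers_read >= 5:
        return 0.75         # hypothetical intermediate tier
    return 0.6

# Example: well-read, but unable to restate the opposing position.
print(max_confidence(papers_read=12, passes_itt=False))  # 0.5
```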
In sum, to calibrate your political beliefs, only assign high confidence to beliefs which satisfy extremely demanding epistemic standards.
75 comments
Comments sorted by top scores.
comment by Qiaochu_Yuan · 2013-05-12T20:29:24.075Z · LW(p) · GW(p)
I suggest the alternative strategy of not having political beliefs at all in the name of combating privileging the question. Once you're in a position to actually influence policy, then maybe it makes sense to have opinions about policy.
↑ comment by fubarobfusco · 2013-05-12T22:04:17.781Z · LW(p) · GW(p)
Once you're in a position to actually influence policy, then maybe it makes sense to have opinions about policy.
How does anyone manage to acquire a position to actually influence policy? From what I can tell, people of my acquaintance who have done this have begun with some opinions about policy ... and, indeed, have sought positions that accord with their preëxisting policy opinions.
↑ comment by Qiaochu_Yuan · 2013-05-13T01:26:48.647Z · LW(p) · GW(p)
I think if you'd like to see some changes in the world, you should think of your opinions about those changes as opinions about changes you'd like to see in the world, and then see if political action is a tool that can help you accomplish those changes. Putting those opinions in the politics bucket seems like tempting fate as far as inviting being mindkilled about them.
↑ comment by DanArmak · 2013-05-13T19:29:44.307Z · LW(p) · GW(p)
Some people want to be politicians. They do best by joining an existing party or movement and adopting all their political opinions except for maybe one or two issues they personally care about.
Other people want to affect policy on a certain issue, and they decide to do so via politics. But once they enter politics, I think most of them (not all) tend to affiliate with a party, both to get things done and because of the personal contacts they develop, and again adopt its other opinions.
And yet other people (the majority, I think) try to affect policy without becoming politicians. Most changes in effective policy happen because a new product becomes available on the market, because someone expands or curtails a service, because someone changes prices by R&D or by contributing money to an existing concern. And these people can remain free of politics if they want to.
↑ comment by Tyrrell_McAllister · 2013-05-13T23:21:22.063Z · LW(p) · GW(p)
Once you're in a position to actually influence policy, then maybe it makes sense to have opinions about policy.
TDT/UDT-type reasoning makes this a little less straightforward. The question isn't, "Am I personally in a position to influence policy?". The question is more like, "Consider the collection of people whose opinions on policy are logically correlated with my own. Are these people, in the aggregate, in a position to influence policy?".
↑ comment by pragmatist · 2013-05-13T13:42:07.174Z · LW(p) · GW(p)
Is this general advice that you would also give to someone who, say, expressed an interest in learning about quantum gravity, or do you think there is something especially harmful about political beliefs?
Also, it seems there should be ways to combat privileging the question other than abstaining from political belief entirely. That seems like a rather radical (and somewhat defeatist) solution.
↑ comment by satt · 2013-05-15T01:45:56.088Z · LW(p) · GW(p)
Is this general advice that you would also give to someone who, say, expressed an interest in learning about quantum gravity, or do you think there is something especially harmful about political beliefs?
I wouldn't be very surprised if political beliefs were especially distracting, but non-political (well, non-overtly-political) beliefs & topics can also be good at consuming attention. Bonfires of attention I've seen blazing away on LW include the Sleeping Beauty problem, how to interpret QM, and how to interpret probabilities. (Some political beliefs at least have the merit of being testable!)
Also, it seems there should be ways to combat privileging the question other than abstaining from political belief entirely. That seems like a rather radical (and somewhat defeatist) solution.
And one easier said than done. Whenever someone tells me they're apolitical or refuse to hold political views, I brace myself for the inevitable accidental unveiling of their non-existent politics. (This doesn't just apply to people I have extended personal interactions with. Look at Philip Larkin!)
↑ comment by Kawoomba · 2013-05-12T20:35:02.352Z · LW(p) · GW(p)
Having a sufficiently strong opinion about policy may influence your decision whether to try to get into a position to influence policy in the first place. There is more than one "power" hierarchy; you cannot just get more powerful in general and then equally influence any sort of policy. Sometimes you'll need to decide in advance.
This is similar to a high school student who discovers FAI and cares enough to plan how to get into a position to influence FAI research most efficiently.
↑ comment by Qiaochu_Yuan · 2013-05-12T20:45:30.957Z · LW(p) · GW(p)
I agree that this is a reason one might decide to have political opinions, but disagree that this is in fact why people have political opinions. I think not having political opinions is a good default strategy, and if at some point you find a good reason to deviate from it then so be it; but I think many people default to having political opinions instead.
↑ comment by Yosarian2 · 2013-05-15T02:21:35.860Z · LW(p) · GW(p)
Once you're in a position to actually influence policy, then maybe it makes sense to have opinions about policy.
I'm not sure that makes sense in a representative democracy.
Generally speaking, things tend to change when a significant majority of people first form a specific opinion about an issue, care about that issue a great deal, and communicate that in a way that influences both other voters and politicians. In that sense, everyone is in a position to influence policy in some small way.
I understand that from an economic viewpoint, it might not make sense to expend energy to form a proper opinion if you're going to have minimal influence over it, but if that attitude became widespread, would our form of government continue to function at all?
↑ comment by Qiaochu_Yuan · 2013-05-15T06:11:01.874Z · LW(p) · GW(p)
The TDT-style reasoning is not "would this be bad if everyone did it?" but "would this be bad if everyone sufficiently similar to me did it?", and I think there it's much less clear. If everyone similar to me spent less time on politics and more time on more useful things, I don't think that's obviously a net loss at all.
Also, again, I think influencing the majority to influence politicians is a reason to have political beliefs but that most people don't have political beliefs for this reason. A strategy for having political beliefs optimized for actually changing the opinions of large numbers of people would look very different from the way most people approach politics.
↑ comment by Tyrrell_McAllister · 2013-05-15T19:05:17.904Z · LW(p) · GW(p)
The TDT-style reasoning is not "would this be bad if everyone did it?" but "would this be bad if everyone sufficiently similar to me did it?", and I think there it's much less clear. If everyone similar to me spent less time on politics and more time on more useful things, I don't think that's obviously a net loss at all.
To be slightly more precise, the TDT-style reasoning is, "Would this be bad if everyone who decided using a decision procedure logically correlated with mine did it?".
Now, it might be that your decision procedure is so "logically isolated" that the cohort of people whose decisions are correlated with yours is too small to be politically significant.
But it seems to me that most people arrive at their political opinions using one of a small set of correlated classes of decision procedures. It follows from the pigeon-hole principle that at least one of these correlated classes contains a lot of people. (The fact that you can write above about how "most people approach politics" also points towards this conclusion.) There is a real chance that this class is large enough to be politically significant.
The upshot is that your reasoning, while it might apply to you, would not apply to people who decide these issues in more typical ways, because these people are numerous enough that their political opinions have real influence.
Which is not to say that these people need to be spending more time on politics. But it does suggest that their getting their politics right matters.
↑ comment by Yosarian2 · 2013-05-15T09:10:13.782Z · LW(p) · GW(p)
I do think you might be underestimating the utility of being involved in politics.
It's common to think, say, "There are 300 million people in this country, so if I only have an amount of democratic influence equal to 1/300,000,000 of the country, then it's not worth being involved." (Speaking in terms of the US.) However, the total amount of utility at stake here is huge; the government spends about 3 trillion dollars a year. If you think that, hypothetically speaking, party A would spend that money in a way that's about 10% more efficient at creating positive utility than party B, then that's about 300 billion dollars worth of positive utility at stake. If you have a 1/300-million influence over a decision worth 300 billion dollars of utility, then when you multiply that out, your influence is worth about 1000 dollars of utility. So one would think it would be worth a fair amount of time to try to do something useful with it.
In practice, it may be lower if you don't think that there is much difference between the two parties, while on the other hand a person willing to spend a little time talking about politics, sending letters to their congressmen, and otherwise getting involved probably has a higher degree of influence than that (especially since almost half the country doesn't vote at all). And there are also other things the government does that aren't directly related to money that can also have a significant impact.
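A minimal sketch of the arithmetic above; the budget figure, the 10% efficiency gap, and the 1/300,000,000 share are the comment's illustrative assumptions, not measured quantities.

```python
budget = 3e12            # ~$3 trillion/year of government spending
efficiency_gap = 0.10    # hypothetical: party A creates 10% more utility per dollar
influence = 1 / 300e6    # naive 1/300,000,000 share of the decision

utility_at_stake = budget * efficiency_gap  # $300 billion
your_share = influence * utility_at_stake
print(f"${your_share:,.0f}")                # ~$1,000
```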
↑ comment by Qiaochu_Yuan · 2013-05-15T20:35:19.908Z · LW(p) · GW(p)
If you're trying to compute an expected utility, I don't think this is the right way to compute the probability. The probability to compute is the probability that your vote will decide the election, which I expect to be very small where I live (California) but might be large enough in swing states to make it worthwhile to vote there. Someone posted a nice paper, which I can't find now, computing this probability for several states based on Nate Silver's election data. (A very important point here is that the amount of political leverage you have is not linear in the number of people whose votes you affect: the change in the probability of deciding the election is highly nonlinear as you get closer to the boundary where the outcome changes.)
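To illustrate the nonlinearity claim, here is a rough sketch (not the method of the paper mentioned above) using a normal approximation to a binomial voting model; the electorate size and vote shares are made-up numbers.

```python
import math

def p_decisive(n_voters: int, p_lean: float) -> float:
    """Approximate probability that one extra vote breaks an exact tie.

    Models the other n_voters as i.i.d. votes for candidate A with
    probability p_lean (normal approximation to the binomial pmf at n/2).
    """
    var = n_voters * p_lean * (1 - p_lean)
    z = (n_voters / 2 - n_voters * p_lean) / math.sqrt(var)
    return math.exp(-z * z / 2) / math.sqrt(2 * math.pi * var)

for lean in (0.500, 0.501, 0.510):
    print(f"{lean:.3f}  {p_decisive(1_000_000, lean):.2e}")
# ~8e-04 in a dead heat, ~1e-04 at 50.1%, ~1e-90 at 51%: the pivot
# probability collapses as the expected outcome leaves the boundary.
```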
Nevertheless, I still think that a strategy optimized for actually making a difference in politics looks very different from the strategy most people adopt.
↑ comment by Vladimir_Nesov · 2013-05-15T21:50:45.930Z · LW(p) · GW(p)
The probability to compute is the probability that your vote will decide the election
I think this should be the probability of your decision procedure deciding the election. If many people are using similar decision procedures, the decision they follow commands their combined votes, and correspondingly the probability that the decision matters goes up when there are more similarly-deciding people. From this point of view, the voters in an election are not individuals, but decision procedures, each of which has a certain number of votes, and each decision procedure can decide whether it's useful to cast its many votes. A decision procedure that is followed by many people, but thinks that it only commands one vote is mistaken on this point of fact, and so will make suboptimal decisions.
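In the toy voting model sketched a few comments up, this is easy to quantify: a decision procedure commanding k votes is pivotal whenever the remaining voters land within k votes of a tie, so for small k its pivot probability is roughly k times a lone voter's. A self-contained sketch with the same made-up numbers:

```python
import math

def p_tie(n: int, p: float) -> float:
    # Normal approximation to the chance that n i.i.d. voters split evenly.
    var = n * p * (1 - p)
    z = (n / 2 - n * p) / math.sqrt(var)
    return math.exp(-z * z / 2) / math.sqrt(2 * math.pi * var)

n, p = 1_000_000, 0.5          # hypothetical toss-up electorate
for k in (1, 100, 10_000):     # votes commanded by the decision procedure
    # Pivotal when the others land within k of a tie: ~ k * p_tie, capped at 1.
    print(f"{k:>6}  {min(1.0, k * p_tie(n, p)):.2e}")
# A procedure shared by 10,000 voters is all but certain to matter in a
# dead heat; a lone voter's chance is ~8e-04.
```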
↑ comment by Yosarian2 · 2013-05-16T00:42:07.000Z · LW(p) · GW(p)
Good point.
A good example of this is how certain very simplistic decision procedures, like "single issue voters", can have influence far above and beyond their numbers. If 5% of the population will always vote based on one specific issue, and that is both known and understood by politicians, then even if they are in the minority, they have a major amount of influence over that one issue, because that decision procedure is so significant. Examples: gun lobbies, labor groups, abortion, etc.
↑ comment by Yosarian2 · 2013-05-15T21:53:47.483Z · LW(p) · GW(p)
I don't think your political influence is primarily dependent on the probability that your one, single vote will decide the election. It's a much more nebulous thing than that in reality; it's how you show up in polls, how politicians think their positions may affect your vote, how likely your demographic is to vote in the first place and how that affects politicians' priorities, how well you are able to articulate your position, how many resources a political party feels they need to devote to your district instead of some other district, etc. For example, even if it didn't change any elections at all, I think that we would be funding college in a very different way if a higher percentage of college students voted.
Nevertheless, I still think that a strategy optimized for actually making a difference in politics looks very different from the strategy most people adopt.
Probably true. The strategy most people adopt looks more to me like "let me try to spend the minimal amount of time necessary to get enough information to feel confident in deciding who to vote for". Much less payoff than a deliberate strategy to maximize making a difference, but much less cost involved as well. There's also a general attitude among a lot of people that, as a rule, doing at least that is your responsibility as a citizen, and I think that is probably correct.
↑ comment by ThrustVectoring · 2013-05-12T23:20:36.756Z · LW(p) · GW(p)
Conversely, if you're not in a position to actually influence policy, you're better off optimizing your political statements and beliefs by their social usefulness. Embracing the mindkill, in other words. If making fun of George W. Bush helps you be more popular in San Francisco or NYC, or making fun of Obama makes you more popular in %small_town, then why not?
↑ comment by gjm · 2013-05-13T00:37:32.939Z · LW(p) · GW(p)
why not?
It may encourage sloppy thinking habits that make you less effective in other ways. It may make you popular with people you'd probably have got on well with anyway, while losing some (maybe more valuable) opportunities to interact positively with a more diverse set of people. It may risk having your insincerity noticed by exceptionally insightful people, who might have been good friends or useful contacts. It may lead you to behave in ways that harm you or the world in an attempt to signal your political affiliation.
↑ comment by Vladimir_Nesov · 2013-05-13T00:20:46.382Z · LW(p) · GW(p)
optimizing your political statements and beliefs by their social usefulness
Ideas constructed in this manner are not "beliefs" in the sense that they are not evidence about the world that is useful for navigating it. It's deception/self-deception to pass such ideas off as beliefs, and it might be hard for them to turn into actual anticipation-controlling beliefs, so perhaps it's somewhat misleading to call them "beliefs".
↑ comment by Yosarian2 · 2013-05-15T02:25:20.673Z · LW(p) · GW(p)
then why not?
Well, as psychological studies have shown, if there is groupthink around a specific issue, having one person be seen to visibly disagree with the group can make it easier for other people to disagree, which improves the whole group's decision making process.
If you actually think that politician X is doing a decent job, and you are willing to say so in an environment where others may disagree with you, then that frees other people to think and act in a more independent way, improving the whole group's ability to make rational decisions about politics.
↑ comment by Michael Wiebe (Macaulay) · 2013-05-12T20:58:53.315Z · LW(p) · GW(p)
Both strategies might end up producing the same outcome. Define a "Certified Political Belief" as a belief which satisfies the above standards. In my own case, I don't actually have any strong political beliefs (>90% confidence) which I would claim are Certified (except maybe "liberal democracy is a good thing").
In fact, a good exercise would be to take your strongest political beliefs, actually write down which academic articles you've read that support your position, and then go do a quick test (with a third-party referee) to see whether you're ideologically Turing-capable. This sounds like a good way to get feedback to help you calibrate.
↑ comment by BerryPick6 · 2013-05-12T20:34:42.834Z · LW(p) · GW(p)
It does make for boring company in certain circumstances, though, and having well-thought-out political positions is high-status, despite the mindkilling involved.
Although, I suppose, if you didn't live in a community that engaged in political opinion status games, this would be the way to go.
↑ comment by Qiaochu_Yuan · 2013-05-12T20:42:49.037Z · LW(p) · GW(p)
If you only have political opinions for the status benefits, then why would you need to calibrate them?
↑ comment by BerryPick6 · 2013-05-12T20:44:22.961Z · LW(p) · GW(p)
Good point.
↑ comment by Nominull · 2013-05-15T01:02:24.658Z · LW(p) · GW(p)
If you run in social circles where having well-calibrated beliefs is high-status, not gonna name any names.
↑ comment by Qiaochu_Yuan · 2013-05-15T06:09:10.044Z · LW(p) · GW(p)
But it's easier to have well-calibrated beliefs about things that aren't politics. Also more useful (e.g. if those things are how to run a startup properly, or how to exercise properly, or...). Most people aren't in a position to test most political beliefs.
comment by NoSignalNoNoise (AspiringRationalist) · 2013-05-13T00:35:09.059Z · LW(p) · GW(p)
One workaround would be to assign high confidence only to beliefs for which you have read n academic papers on the subject.
This is dangerous, because people tend to use additional information to support their existing opinion rather than to improve it. See Motivated Skepticism in the Evaluation of Political Beliefs.
↑ comment by Michael Wiebe (Macaulay) · 2013-05-13T04:51:32.741Z · LW(p) · GW(p)
Good point. Do you think the ideological Turing-capability requirement helps to mitigate this danger, and if so, how much does it help?
comment by buybuydandavis · 2013-05-12T21:27:02.527Z · LW(p) · GW(p)
One workaround would be to assign high confidence only to beliefs for which you have read n academic papers on the subject.
Academic papers are what gets published, not what's true. The difference is particularly pronounced for political topics.
you should be able to accurately restate your opponent’s position
There are limits. You can't accurately restate gibberish. You can mimic it as in a Turing test, but I don't see any criteria for accuracy.
I think the best you can do is identify the unstated assumptions: get both sides to say, "If A, B, C, then side1 is right; if ~A, ~B, ~C, then side2 is right; and if neither, then side1 and side2 each have only a piece of the truth."
↑ comment by fubarobfusco · 2013-05-12T21:58:37.850Z · LW(p) · GW(p)
You can't accurately restate gibberish.
Good point. If someone appears to be emitting gibberish on a subject, but seems to be a reasonably functional member of society (i.e., is probably not floridly psychotic), and there's nothing about the structure of the subject that seems to license gibberish (e.g., a subject where dreams or psychedelic visions are treated as unquoted evidence), this may indicate that you simply don't understand that subject and should learn more before attempting to judge their statements.
For instance, I would expect that a person who had no higher math would consider correct statements in category theory to be indistinguishable from gibberish.
↑ comment by Yosarian2 · 2013-05-15T02:28:07.720Z · LW(p) · GW(p)
this may indicate that you simply don't understand that subject and should learn more before attempting to judge their statements.
It may, but not necessarily. Especially on issues of politics and religion, otherwise rational people may repeat gibberish if they think the alternative is to "let their side down" and "let the other side win".
↑ comment by buybuydandavis · 2013-05-13T04:14:55.809Z · LW(p) · GW(p)
To take some liberties with Stirner:
Do not think that I am jesting or speaking figuratively when I regard almost the whole world of men as veritable fools, fools in a madhouse, who only seem to go about free because the madhouse in which they walk takes in so broad a space.
People have all sorts of crazy nonsense in their heads, particularly with respect to morality and politics, as Stirner pointed out. If you want to conclude that I just don't understand their perfectly sensible views, feel free. If you want to conclude that you just don't understand them when they seem to be talking nonsense to you, knock yourself out. I have a lifetime of experience to the contrary, and am not persuaded by your say so.
and there's nothing about the structure of the subject that seems to license gibberish
The subject doesn't license gibberish, though people often take such license in politics, morality, religion, etc. Nowhere is there more nonsense, and political belief is the subject matter under discussion.
↑ comment by Multiheaded · 2013-05-13T12:00:36.461Z · LW(p) · GW(p)
If you want to conclude that I just don't understand their perfectly sensible views, feel free. If you want to conclude that you just don't understand them when they seem to be talking nonsense to you, knock yourself out. I have a lifetime of experience to the contrary, and am not persuaded by your say so.
Ok... so, as long as we're in a community developed specifically for such things - tell me, what kind of evidence would it take to change your mind about Those Evil Meddling People, and in what regard?
↑ comment by buybuydandavis · 2013-05-13T18:37:22.493Z · LW(p) · GW(p)
The point on crazy nonsense applies to meddlers and non-meddlers alike.
What would it take for me to change my views with respect to how much crazy I estimate in other people's heads? It would help if, upon examination, they could routinely provide sensible explanations for what seems to me nonsense.
I have confidence that I could take the ideological Turing test and pass myself off as being quite sensible to most of them. Few of them could do it to me.
↑ comment by Multiheaded · 2013-05-13T21:35:55.284Z · LW(p) · GW(p)
I have confidence that I could take the ideological Turing test and pass myself off as being quite sensible to most of them.
Well, honestly, I doubt it - take e.g. your methodological individualism. From what I've read of it (a few blog posts by Austrian economists), it basically appears as a crazy nonsensical fairytale, to be invoked as ideological justification for a "libertarian" narrative of society.
I claim that the historical dynamics of actually existing societies can't be usefully explained by it - that a massive amount of historical... stuff is deterministic, intersubjective and not easily pinpointed but far from "abstract"/"ghostly", and shapes individual wills first, even when it is in turn shaped by them. So, could you make a strong and unequivocal argument against methodological individualism, from whatever position?
↑ comment by fubarobfusco · 2013-05-14T00:29:47.653Z · LW(p) · GW(p)
shapes individual wills first, even when it is in turn shaped by them
This seems not only intuitively obvious, but a prerequisite for (e.g.) advertising and propaganda techniques actually working well enough for anyone to bother spending much money on them.
↑ comment by buybuydandavis · 2013-05-13T22:13:08.563Z · LW(p) · GW(p)
Methodological individualism doesn't preclude someone from passing an ideological Turing test for someone who doesn't use it, just as sanity doesn't prevent someone from pretending to be insane.
↑ comment by Multiheaded · 2013-05-13T22:36:12.919Z · LW(p) · GW(p)
Ok, go ahead, then. Hit me with your best shot. If you give me a halfway serious effort, I promise I'll return the favour with a defense of MI. (From what I've heard, Popper is the best known non-Austrian champion of MI; need to read up on him.)
↑ comment by buybuydandavis · 2013-05-13T23:38:50.450Z · LW(p) · GW(p)
"It's not fair! You're so hateful! The government is us. It takes a village. It depends on what the meaning of 'is' is. What difference at this point does it make?"
How'd I do?
But I don't think this is the way the Turing test is supposed to work. I don't just pontificate; you're supposed to be an interrogator, and there's supposed to be another blinded participant who is an average, run-of-the-mill American liberal. I think we lack the facilities, but it could be good fun.
↑ comment by Prismattic · 2013-05-14T00:05:03.837Z · LW(p) · GW(p)
Multiheaded isn't American, so why would you want the judge to be?
↑ comment by buybuydandavis · 2013-05-14T00:45:52.050Z · LW(p) · GW(p)
Multiheaded would be the judge, and the US liberal would be the comparison who would have to be more convincingly liberal than me by Multiheaded's estimation.
Why American? Because I'm American, and I wasn't claiming to be able to impersonate every crazy on the globe.
As for why liberal: it's a crazy I'm familiar with, and one with a large population here, and he said "Evil Meddling People", so the shoe fit.
I don't claim that I can impersonate every crazy in the world, only ones I'm familiar with.
Though Multiheaded really shouldn't be the judge. It should be another run of the mill American liberal. The relevant Turing test is whether I can pass myself off as one of the tribe.
↑ comment by Michael Wiebe (Macaulay) · 2013-05-12T21:50:48.815Z · LW(p) · GW(p)
Academic papers are what gets published, not what's true. The difference is particularly pronounced for political topics.
Right, it's a necessary condition, not a sufficient one.
↑ comment by 9eB1 · 2013-05-13T16:21:47.778Z · LW(p) · GW(p)
It's not a necessary condition. Academic papers are regularly mistaken due to methodological limitations, bad statistical methodology, publication bias, stretching of conclusions, and a host of other factors. The fact that information has been published is evidence that it contains true information, but is not even necessarily strong evidence. Keep in mind that in the field of cancer research, which is on much firmer quantitative footing than political science "research," researchers were unable to replicate 89% of studies.
↑ comment by Michael Wiebe (Macaulay) · 2013-05-13T17:00:08.758Z · LW(p) · GW(p)
Do you think that the standards given in the OP are too demanding? Not demanding enough?
↑ comment by 9eB1 · 2013-05-13T19:07:30.707Z · LW(p) · GW(p)
One workaround would be to assign high confidence only to beliefs for which you have read n academic papers on the subject. For example, only assign 90% confidence if you've read ten academic papers.
If you have read 10 papers on a political question, all the papers are concurring, and they represent the entire body of literature on that question, then 90% can be warranted. I strongly suspect that if you do a fair review of any question there will be a tremendous amount of disagreement, however, and a well-calibrated observer would rarely approach 90% without strong pre-existing ideological biases subverting their estimation processes.
I have read quite a few macroeconomics papers, and there are very few non-trivial things that I would assign a 90% probability to, and almost by definition those aren't the sort of things people talk about in "political discussions." The more you read, the more humble you become with respect to our level of knowledge in the social sciences, especially if you are literate in the hard sciences. If you expect your knowledge to cash out in terms of predictions about the world, we don't know much. This is why, if you ask an economics professor almost any question of substance, they will provide you with a lengthy docket of caveats.
↑ comment by Michael Wiebe (Macaulay) · 2013-05-13T22:13:19.723Z · LW(p) · GW(p)
I agree with all of this.
comment by pragmatist · 2013-05-13T14:06:31.591Z · LW(p) · GW(p)
I don't think that being able to state the strongest counterargument to your position is a good proxy for the ideological Turing test (or vice versa). Most people with strong political views tend not to have very strong arguments for those views, precisely because they don't carefully consider counterarguments from the opposing side. So if I were the judge in an ideological Turing test, and the test subject made a sophisticated and nuanced argument for his position, one that is constructed as a strong refutation of the opposing side rather than just as a signal of comradeship to people on the same side, I would be suspicious.
The best way to pass an ideological Turing test is to understand the typical arguments made by the opposing side, and that is very different from understanding the strongest arguments made by the opposing side. So I don't think passing an ideological Turing test is an adequate indicator of the sort of epistemic standard you're trying to get at here. It might work if the attempt is to discriminate between partisans who are explicitly known for being intellectually sophisticated (say, Caplan vs. Krugman), but that is a much more specific sort of Turing test.
↑ comment by RomeoStevens · 2013-05-14T03:29:43.074Z · LW(p) · GW(p)
I would agree with you if the purity of the test were the goal, but my understanding is that the ideological Turing test is a means to an end, that end being both people working together to steelman both sides of the debate. Once it has been established that each person fully understands the strongest version of both sides, a true debate can begin.
comment by elharo · 2013-05-12T23:07:17.589Z · LW(p) · GW(p)
It's good that you restrict this to beliefs of the form "policy X will result in outcome Y". That may be answerable. Much political discussion and dispute is more about relative preferences and goals than matters of fact. For example, gay marriage: few people honestly dispute any matter of fact in the gay marriage debate, nor would many people's minds be changed by learning they were wrong on a matter of fact. It's an argument about values and preferences.
↑ comment by kalium · 2013-05-13T17:08:26.687Z · LW(p) · GW(p)
I've seen a number of disputes of fact on this matter. It's likely that these aren't usually true rejections of gay marriage, but I doubt all the people who express these concerns are straight-out lying. This article seems sincere for example.
- People agree that gay marriage will increase the number of same-sex couples raising children. Is having a pair of opposite-sex caretakers crucial for a child's well-being?
- Will allowing gay marriage increase the prevalence of open marriages?
- Will allowing gay marriage increase STD prevalence?
- Will allowing gay marriage decrease the rate of marriage overall?
↑ comment by TheOtherDave · 2013-05-13T17:17:17.972Z · LW(p) · GW(p)
I agree that this probably isn't straight-out lying, in that someone who asks (e.g.) "Will allowing gay marriage decrease the rate of marriage overall?" probably isn't thinking "Of course allowing gay marriage isn't likely to decrease the rate of marriage overall... honestly, how many people will refuse to get married simply because a gay couple down the block is getting married? But if I ask the question, people will start to think that it will, because people are easy to manipulate that way."
↑ comment by Eugine_Nier · 2013-05-14T05:16:36.468Z · LW(p) · GW(p)
Of course allowing gay marriage isn't likely to decrease the rate of marriage overall... honestly, how many people will refuse to get married simply because a gay couple down the block is getting married?
Yes, when you only consider the most straightforward causal path, of course it seems absurd. (Incidentally, the problem with attempting to think of politics in utilitarian terms is that it makes it easy to make this kind of mistake.) The causal path in question is about undermining the Schelling points on which marriage is based.
The reason the institution of marriage developed in the first place is that it's an effective institution for raising children. Gay marriage is a part of a modern tendency to neglect this aspect of marriage.
↑ comment by Desrtopa · 2013-05-14T05:22:37.025Z · LW(p) · GW(p)
The institution of monogamous marriage appears to be a corruption of the older, more reproductively productive institution of polygyny by successful males. Why favor the middle-aged institution over the ancient?
↑ comment by Eugine_Nier · 2013-05-14T05:31:36.124Z · LW(p) · GW(p)
The institution of monogamous marriage appears to be a corruption of the older, more reproductively productive institution of polygyny by successful males.
This type of polygyny wasn't nearly as widespread as you seem to think. In any case monogamous marriage has been around long enough that we can see that it works. The same isn't at all clear for what modern marriage is turning into.
↑ comment by Desrtopa · 2013-05-14T05:55:32.485Z · LW(p) · GW(p)
I'm talking prehistoric here; we have evidence to suggest that raiding for females was a regular feature of human culture for most of our species' existence (discussed in this book).
In any case, the modern, marry-for-love tradition, as opposed to marriage for political or economic expediency, is recent enough as to be practically untested in historical terms. It would be disingenuous to pretend that reverting to the institution of eighty years ago is in any way a return to a time-tested standard.
↑ comment by Eugine_Nier · 2013-05-15T02:59:28.472Z · LW(p) · GW(p)
I'm talking prehistoric here; we have evidence to suggest that raiding for females was a regular feature of human culture for most of our species' existence
This wasn't that big an effect in practice, i.e., the average woman was not taken as a captive during her lifetime. War is negative-sum, and for all their glorification of war, even aggressive societies spent most of their energies on productive activities.
In any case, the modern, marry-for-love tradition, as opposed to marriage for political or economic expediency,
The two aren't mutually exclusive.
↑ comment by TheOtherDave · 2013-05-14T14:17:27.477Z · LW(p) · GW(p)
What would I expect to see if I thought about the issue properly in terms of the Schelling points on which marriage is based?
For example, would I expect to see a detectable decrease in the rate at which people get married overall in jurisdictions that legalize same-sex marriage relative to jurisdictions that don't? Would I expect to see a detectable increase in the rate of out-of-wedlock childbirths in these jurisdictions?
Would I expect to see a more general decrease across jurisdictions, with the rate of decrease proportional to the number of such jurisdictions at any given time?
Something else?
comment by buybuydandavis · 2013-05-12T21:28:57.688Z · LW(p) · GW(p)
A friend played the ideological Turing test on the web in some newsgroup, eventually becoming an ideological leader for his pretend side. Probably a good exercise.
↑ comment by Viliam_Bur · 2013-05-14T08:30:12.399Z · LW(p) · GW(p)
I think that a person pretending to have a view X -- if they are able to do it well -- has a greater chance to become a leader than a person who really has the view X. The pretending person is more free to optimize for signalling. And the internet reduces some costs usually coming with real-life signalling, but our instincts are not calibrated to discount that.
If you learn to play the ideological Turing test well, you know the "correct" (best for signalling) answers, and you can always use them. A real believer would at some moment diverge from the optimum path. Also, a real believer would hesitate to lie to their own "tribe", while for a pretender everything is just a game. So at some critical moment the real believer will show their weakness, and the pretender can chide them for their lack of faith, getting higher status within the group. (Even if the smartest members of the group can see through this move and become suspicious, there will be a huge peer pressure on them.)
In real life, people could be judged by their actions, for example how they live, who they are friends with, etc. It could be costly to avoid everything that sends a bad signal to the group, and to spend time mostly with other tribe members. But on the internet, nobody knows what you do in real life. And you can use a new identity for your group membership.
On the other hand, merely understanding the positions and arguments of the group may not be enough to pass the ideological Turing test, because group members can be recognized by more than just having the right ideology. You need to understand the jargon, and understand and be able to make references to knowledge shared by the group, which could be very large. Especially if you want to become (or pretend to be) a high-status member.
For example, if you pretend to be a Christian in a debate about abortion, and somewhere during the debate it comes out that you have never heard about e.g. Jesus walking on water, that becomes evidence that you are a fake Christian, even though walking on water is technically unrelated to abortion. So you can fail the test on an unrelated piece of data. There can be thousands of such pieces, and the group may use them to recognize its members. Learning all those pieces is a cost you can't completely avoid even online. (Good google skills may save the day sometimes, but won't protect you against the unknown unknowns.)
↑ comment by cataphract93 · 2013-05-12T23:13:46.633Z · LW(p) · GW(p)
I recall vaguely reading about two economists (with different views) in a game where they would answer some questions as if they were each the other.
Does anyone know what I'm talking about?
↑ comment by gjm · 2013-05-13T00:39:59.411Z · LW(p) · GW(p)
The idea of an "ideological Turing test" arose from a disagreement between Bryan Caplan and Paul Krugman (two economists with different views). Caplan and Krugman never (so far as I know) engaged in any such test, but some other economists with different views apparently have done. See, e.g., the Wikipedia article.
↑ comment by buybuydandavis · 2013-05-14T00:32:40.823Z · LW(p) · GW(p)
I think Haidt had a study with evidence in support of Caplan: conservatives could mimic liberals better than liberals could mimic conservatives.
↑ comment by cataphract93 · 2013-05-13T13:37:26.465Z · LW(p) · GW(p)
This is it! Thanks so much!
comment by Nornagest · 2013-05-15T01:12:30.742Z · LW(p) · GW(p)
One workaround would be to assign high confidence only to beliefs for which you have read n academic papers on the subject. For example, only assign 90% confidence if you've read ten academic papers.
Easy hack:
- Google "papers supporting $controversial_position"
- Read n papers linked in results
- Assign confidence proportional to n
- Enjoy cozy feelings of intellectual superiority
I don't think many people do this as such, but there are less self-aware versions of the same procedure that do happen in practice. For example, if you hang out on any reasonably intellectual partisan blog, links to related papers will probably come your way pretty often. If you read them as they arrive and update as suggested, in fairly short order you'll have read enough to assign high confidence to your preexisting opinions -- yet those opinions will never be seriously challenged, because all the information involved has been implicitly screened for compatibility before it gets anywhere near your head.
Your second criterion helps but I don't think it's sufficient; it's very easy to convince yourself that you understand the strongest opposing arguments as long as you've been exposed to simplified or popularized versions of them, which to a first approximation is true for everyone with opinions on controversial issues.
comment by DanArmak · 2013-05-13T19:34:12.936Z · LW(p) · GW(p)
I expect it would be hard to obtain good data about the actual results of implemented politics. (Not policies, which is a much more general term; just those policies adopted through a highly politicized process, like national laws or budget changes.)
This is for two reasons. First, major policy changes mostly happen when power changes hands, and a lot of changes happen together; their effects are hard to disentangle from one another.
Second, most policies are attempts to influence human behavior, and people's reaction to political policies is itself politicized. People will react differently based on which party introduced a policy, or what other opinions it is publicly associated with.
This is just my prediction; I haven't checked it and I have no data to present in support.
comment by BerryPick6 · 2013-05-12T20:22:53.531Z · LW(p) · GW(p)
I was just thinking about this the other day, and I think this is a very good idea.
comment by [deleted] · 2013-05-12T23:52:03.262Z · LW(p) · GW(p)
there's no easy way to assess whether a belief is true or false, in contrast to the trivia questions in the credence game. Moreover, it’s very easy to become mindkilled by politics. What do you do?
Tentatively preserve what is working, based on regular testing to confirm it is still working. Cautiously adopt changes with an emphasis on how to minimize the ways they can go wrong, not the possible benefits if they go right. Test newly adopted policies regularly. This was how 'conservatism' might have described itself at one time, but that word has other meanings now.
↑ comment by Eugine_Nier · 2013-05-14T05:20:29.972Z · LW(p) · GW(p)
Also keep in mind that the system is sufficiently complicated that it won't always be easy to see that the cause of problem Y was change X.
comment by Bruno_Coelho · 2013-05-13T15:03:08.987Z · LW(p) · GW(p)
Most decisions about large-scale interventions could be settled in economic terms. However, MIRI's recent strategic shift toward math questions shows us the difficulty of the heuristics approach. The amount of resources needed to find even very noisy information about the factors in some kind of scenario seems very costly.