Rational Repentance

post by Mass_Driver · 2011-01-14T09:37:11.613Z · LW · GW · Legacy · 154 comments

Contents

  Review:
  The Problem:
  Solutions:

Related to: Branches of Rationality, Rationality Workbook

Changing your behavior to match new evidence can be harder than simply updating your beliefs and then mustering your willpower, because (a) we are in denial about how rarely we change our minds, (b) cognitive dissonance is tolerable in the medium term, and (c) verifying that your actions, and not just your beliefs, have changed requires extra monitoring, which makes it easier to pretend that your actions already suit your reality. It might help to (1) specify a quitting point in advance, (2) demonstrate your new opinion with symbolic action, or (3) activate your emotions by reading non-rational propaganda. Additional solutions are eagerly solicited.

Disclaimer:

This post contains examples drawn from politics and current events. I do not hope to change anyone's mind about any specific political belief, I know that Politics is the Mind-killer, I have tried to use non-inflammatory language, and I have a good faith belief that this post contains actual content on rationalism sufficient to justify its potentially controversial examples. Equally powerful but less controversial examples will be cheerfully substituted if anyone can bring them to my attention.

Review:

As has been amply discussed in the sequences, a key tool for overcoming the human tendency to irrationally defend prior beliefs simply because they are comfortable is to ask what, if anything, would cause you to abandon those beliefs. For example, in the “invisible dragon in the garage” parable, it quickly becomes clear to neutral observers that there is no potential evidence that could convince an invisible-dragon-fundamentalist that the dragon is fictional. If you test for breathing noises, it turns out that the dragon is inaudible. If you test for ecological impact, it turns out that the dragon lives off of invisible hamsters, etc. Thus we say that the belief in the dragon is unfalsifiable; there is no way to falsify your hypothesis that there is a dragon in your garage, and so your belief in the dragon does not pay rent in anticipated experiences.
 
There is a second human bias that causes you to cache an unrealistically high summary statistic for how often you change your mind: you think you change your mind, in general, pretty often, but unless you are an expert, highly-practiced rationalist, odds are that you do not. As evidence, try thinking of the last time you changed your mind about something and force yourself to specify what you believed beforehand and what you believed afterward. Me personally, I haven't changed my mind about anything that I can remember since about November 10th, 2010, and I'm sure I've expressed thousands of opinions since then. The odds are long.

The Problem:

There is a third human bias that causes you to tell yourself that you have successfully changed your mind when you have not really done so. The adherent of the Reformed Church of Dragon leaves the garage door open, and cheerfully admits to anyone who asks that there is probably no such thing as an invisible dragon, yet she is unaccountably cautious about actually parking her car in the garage. Thus it is worth knowing not just how to change your mind, but how to change your habits in response to new information. This is a distinct skill from simply knowing how to fight akrasia, i.e., how to muster the willpower to change your habits in general.
 
One example of this failure mode, recently reported by Slate.com, involves American troops in Iraq: there are at least some regions in Iraq where many people strongly prefer not to have American troops around, and yet American troops persist in residing and operating there. In one such region, according to a former American soldier who was there, the people greeted the incoming foreigners with a large, peaceful protest, politely asking the Yankees to go home. When the request was ignored, locals began attacking the Americans with snipers and roadside bombs. According to the ex-soldier, Josh Stieber, the Americans responded not by leaving the region, but by ordering troops to shoot whoever happened to be around when a bomb went off, as a sort of reprisal killing. At that point, cognitive dissonance finally kicked in for Josh, who had volunteered for the military out of a sense of idealism, and he changed his mind about whether he should be in Iraq: he stopped following orders, went home, and sought conscientious objector status.

The interesting thing is that his comrades didn't, even after seeing his example. The same troops in the same town confronted with the same evidence that their presence was unwelcome all continued to blame and kill the locals. One of Josh's commanders wound up coming around to Josh's point of view to the extent of being able to agree to disagree and give Josh a hug, but still kept ordering people to kill the locals. One wonders: what would it take to get the commander to change not just his mind, but his actions? What evidence would someone in his position have to observe before he would stop killing Iraqis? The theory is that American military presence in Iraq is good for Iraqis because it helps them build democracy, or security, or their economy, or some combination. It's moderately challenging to concede that the theory could be flawed. But, assuming you have the rationalist chops to admit your doubt, how do you go about changing your actions to reflect that doubt? The answer isn't to sit at home and do nothing; there are probably wars, or at the very least nonviolent humanitarian interventions, that are worth sending people abroad for (or going yourself, if you're not busy). But if you can't change your behavior once you arrive on the scene, your doubt is practically worthless -- we could replace you with an unthinking, unquestioning patriot and get the same results. 

Another example was reported by Bill McKibben, author of Deep Economy, who says he happened to be in the organic farming region of Gorasin, Bangladesh the day an international food expert arrived to talk about genetically engineered "golden rice," which, unlike ordinary rice, is rich in Vitamin A and can prevent certain nutritional deficiency syndromes. "The villagers listened for a few minutes, and then they started muttering. Unlike most of us in the West who worried about eating genetically modified organisms, they weren't much concerned about 'frankenfood.' Instead, they instantly realized that the new rice would require fertilizer and pesticide, meaning both illness and debt. More to the point, they kept saying, they had no need of golden rice because the leafy vegetables they could now grow in their organic fields provided all the nutrition they needed. 'When we cook the green vegetables, we are aware not to throw out the water,' said one woman. 'Yes,' said another. 'And we don't like to eat rice only. It tastes better with green vegetables.'"

Bill doesn't say how the story ended, but one can imagine that there are many places like Gorasin where the villagers ended up with GMOs anyway. The November/December 2010 issue of Foreign Affairs has a pretty good article (partial paywall) about how international food donors have persisted in shipping grain -- sometimes right past huts full of soon-to-rot local stockpiles -- when what is really needed are roads, warehouses, and loans. One could argue that the continued funding of food aid at 100 times the level of food infrastructure aid, or the continued encouragement of miracle mono-crops in the face of local disinterest, simply reflects the financial incentives of large agricultural corporations. Considering how popular farmers are and how unpopular foreign aid is, though, there are doubtless easier ways for Monsanto and ConAgra to get their government subsidies. At least some of the political support for these initiatives has to come from well-intentioned leaders who have reason to know that their policies are counterproductive but who are unable or unwilling to change their behavior to reflect that knowledge.

It sounds stupid when people act this stubbornly on the global stage, but it is surprisingly difficult not to be stubborn. What, if anything, would convince you to stop (or start) eating animals? Not merely to admit, verbally, that it is an acceptable thing for others to do, or even the moral or prudent thing for you to do, but to actually start trying to do it? What, if anything, would convince you to stop (or start) expecting monogamy in your romantic relationships? To save (or borrow) significant amounts of money? To drop one hobby and pick up another? To move across the country?

And, here's the real sore spot: how do you know? Suppose you said that you would save $1,000 a year if the real interest rate were above 15%. Would you really? What is your reference class for predicting your own behavior? Have you made a change like that before in your life? How did the strength of the evidence you thought it would take to change your behavior compare to the evidence it actually took to change your behavior? How often do you make comparably drastic changes? How often do you try to make such changes? Which are you more likely to remember -- the successful changes, or the failed and quickly aborted attempts?

Solutions:

Having just recently become explicitly aware of this problem, I'm hardly an expert on how to solve it. However, for whatever it is worth, here are some potential coping mechanisms. Additional solutions are strongly encouraged in the comments section!

1) Specify a quitting point in advance. If you know ahead of time what sort of evidence, E, would convince you that your conduct is counterproductive or strictly dominated by some other course of conduct, then switching to that other course of conduct when you observe that evidence will feel like part of your strategy. Instead of seeing yourself as adopting strategy A and then being forced to switch to strategy B because strategy A failed, you can see yourself as adopting the conditional strategy C, which calls for strategy A in circumstance ~E and for strategy B in circumstance E. That way your success does not depend on your commitment, which should help bring your commitment down toward an optimal level. (A rough sketch of what writing such a rule down might look like follows below.)

Without a pre-determined quitting point, you run the risk of making excuses for an endless series of marginal increases in the strength of the evidence required to make a change of action appropriate. Sunk costs may be an economic fallacy, but they're a psychological reality.
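To make this concrete, here is a minimal sketch of what it might look like to literally write your conditional strategy down as a tiny program before you start. The particular goal, threshold, and review date are invented for illustration; the point is only that E, strategy A, and strategy B all get pinned down in advance:

```python
from datetime import date

# A pre-registered conditional strategy, written down *before* starting.
# Everything below (the goal, the threshold, the review date) is an
# illustrative assumption, not a recommendation.
QUITTING_RULE = {
    "strategy_a": "keep volunteering at the community garden",
    "strategy_b": "redirect those hours to donating cash instead",
    "evidence_e": "fewer than 5 regular volunteers at the monthly count",
    "review_date": date(2011, 6, 1),
}

def choose_action(volunteer_count: int, today: date) -> str:
    """Return whichever strategy the pre-registered rule calls for."""
    if today < QUITTING_RULE["review_date"]:
        return QUITTING_RULE["strategy_a"]   # too early to evaluate
    if volunteer_count < 5:                  # circumstance E observed
        return QUITTING_RULE["strategy_b"]
    return QUITTING_RULE["strategy_a"]       # circumstance ~E

# On the review date only 3 volunteers showed up, so the conditional
# strategy C says to switch to strategy B -- and doing so counts as
# following the plan, not abandoning it.
print(choose_action(volunteer_count=3, today=date(2011, 6, 1)))
```

Whether or not you ever run it, writing the rule in this form forces you to name E precisely, which is most of the benefit.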

2) Demonstrate your new opinion with symbolic action. Have you decided to move to San Francisco, even though your parents and significant other swear they'll never visit you there? Great! We have nice weather here; look forward to seeing you as soon as you can land a job. Meanwhile, buy a great big map of our beautiful city and put it on your bedroom wall. The map, in and of itself, doesn't get you a damn bit closer to moving here. It doesn't purport to influence your incentives the way a commitment contract would. What it does do is help you internalize your conscious decision so the decision is more broadly endorsed by the various aspects of your psyche.

I remember at one point a religious camp counselor caught me using a glowstick on the Sabbath, and advised me to throw the glowstick away, on the theory that kindling a new light on the Sabbath violated the applicable religious laws. I asked him what good throwing away the light would do, seeing as it had already been kindled and would keep on burning its fixed supply of fuel no matter where I put it. He said that even though throwing away the light wouldn't stop the light from having been kindled (there were limits to his magical thinking, despite his religious convictions), it would highlight (har har) my agreement with the principle that kindling lights is wrong and make it easier not to do it again next time. The very sense that it is strange to throw away a lit glowstick helps put cognitive dissonance to work for changing your mind instead of against it: if you didn't strongly believe in the importance of not kindling glowsticks, why on earth would you have thrown it away? But you did throw it away, and so you must believe, and so on. Also, not reaping the benefits of the wrongly kindled light makes kindling lights seem to provide fewer benefits, and makes it easier to resist kindling one the next time -- if you know, in the moment of temptation, that even if you kindle the glowstick you might repent and not let yourself enjoy its light, you'll factor that into your utility function and be more likely to decide that the no-longer-certain future benefit of the light isn't worth the immediate guilt.

Anyway, this is a fairly weird example; I certainly don't care whether people light glowsticks, on a particular tribe's Sabbath or otherwise. I think it probably does help, though, to be a bit of a drama queen. If you buy a cake while you're dieting, don't just resolve not to eat it; physically throw it off the second-story balcony. If you've just admitted to yourself that your erstwhile political enemies actually have some pretty good points, write your favorite ex-evil candidate a letter of support or a $5 check and physically put it in the mail. As much as possible, bring your whole self into the process of changing your actions.

3) Over-correct your opinion by reading propaganda. Propaganda is dangerous when you read it in order to help you form an opinion, and a deontological evil when you publish it to hack into other people's minds (which, depending on circumstances and your philosophy, may or may not be justified by the good consequences that you expect will follow). But when you've already carefully considered the rational evidence for and against a proposition, and you feel like you've changed your mind, and yet you're still acting as if you hadn't changed your mind, propaganda might be just what you need. Read an essay that forcefully argues for a position even more extreme than the one you've just adopted, even if the essay is full of logical cul-de-sacs. In this limited context alone, gleefully give yourself temporary permission to ignore the fact that reading the essay makes you notice that you are confused. Bask in the rightness of the essay and the guilt/shame/foolishness/low-status that people who disagree with it should feel. If you gauge the dosage correctly, the propaganda might nudge your opinion just enough to make you actually adopt the new action that you felt would adequately reflect your new beliefs, but not enough to drive you over the cliff into madness.

As an example, I recently became convinced that eating industrially raised animals while living in San Francisco before the apocalypse can't ever be morally justified, and, yet, lo and behold, I still ate turkey sandwiches at Subway 5 times a week. Obviously I could have just used some of the tactics from the Akrasia Review to make eating less factory-meat a conscious goal...but I'm busy using those tools for other goals, and I think that there are probably at least some contexts in which willpower is limited, or at least a variable-sum game. So I read Peter Singer's book Animal Liberation, and blamed all the woes of the world on steak for a few hours while slowly eating a particularly foul-tasting beef stew that was ruined by some Thai hole-in-the-wall, to reinforce the message. I'm doing a little bit better...I'm more likely to cross the street and eat vegetarian or pay for the free-range stuff, and I'm down to about 3 Subway footlongs a week, without any noticeable decrease in my willpower reserves.

Your mileage may vary; please use this tactic carefully.

4) Your suggestions.

Seriously; most of my point in posting here is to gather more suggestions. If I thought of the three best solutions in two hours with no training, I'll eat my shirt. And I will, too -- it'll help me repent.

154 comments

Comments sorted by top scores.

comment by Anatoly_Vorobey · 2011-01-15T20:32:23.145Z · LW(p) · GW(p)

I think your examples are terrible, and in part it's because they're political - but for a somewhat different reason than the one elaborated in Politics is the Mind-killer.

First, there's the mismatch between the problem you're addressing and the problem your examples illustrate. The problem you're addressing is how to make sure your behavior changes to match your updated beliefs. In this problem, your beliefs have already updated due to the weight of the evidence, but for some reason (and your list of plausible reasons is compelling) your habitual behavior fails to reflect this change in your beliefs. However, neither of your examples is about that at all - they're about beliefs not changing in the face of the evidence. Josh Stieber's fellow soldiers did not change their minds about whether they should be in Iraq. Your example actually appears to argue that they should have, if they behaved rationally - but whether or not it's true, there's no relevance to the problem your post addresses. At one point, you're doing a sleight of hand of sorts (unintentionally, I'm sure):

One of Josh's commanders wound up coming around to Josh's point of view to the extent of being able to agree to disagree and give Josh a hug, but still kept ordering people to kill the locals. One wonders: what would it take to get the commander to change not just his mind, but his actions?

But the commander didn't change his mind, not to the point that would necessitate changing his actions. He merely "agreed to disagree". So there's no one in the first example who's failing to update their behavior following an update to their beliefs.

With the second example it's even worse, because it's more vague. I'm not sure who here is supposed to have updated their actions but didn't - I think it's the international food donors, and, in particular, "well-intentioned leaders who have reason to know that their policies are counterproductive but who are unable or unwilling to change their behavior to reflect that knowledge". But the fact that their policies are counterproductive (granting that for the sake of the argument) is no evidence that they possess that knowledge, that they updated their beliefs accordingly. People do all kinds of counterproductive things all the time while maintaining their belief in their usefulness. To illustrate your problem, you need those food donors to have decided, under the weight of the evidence, that they're doing the wrong thing, yet to persist in doing it. I don't think you have anything like that in your example. Like the first one, it's primarily about people not updating their beliefs when they ought to, in the face of the evidence.

Now, as examples of people not changing their minds when the evidence is compelling, your two examples are terrible - primarily because they're political. And why this should be so is, I think, an interesting aside. It is not because using a political example tends to antagonize some of the readers needlessly - that by itself is true, and a good reason to avoid political examples while talking about rationality, but is only a minor factor here, to my mind. Much more important is this: the story of people failing to account for compelling evidence is by itself a familiar, ubiquitous, low-status specimen of political propaganda.

In fact, one of the most frequent arguments you encounter as you read political discussions is the argument that the other side are ignoring obvious facts, and so failing to behave rationally, because they're blinded by their ideology. To a first approximation, everyone believes that about everyone else. Take any well-divided political issue, and you'll find people on both sides building up detailed stories that show what it is exactly that ought to convince any reasonable person, but fails to convince their opponents due to their ideological bias. Such stories are almost always wrong. Typically they do one or several of: (i) exaggerate the evidence or misrepresent its degree of uncertainty; (ii) ignore conflicting evidence to the other direction; (iii) tacitly assume a host of underlying convictions that are only obvious to your side; (iv) ignore any number of ways the other side could find to explain your evidence without changing their beliefs, not all of them contrived.

Because of these problems, it's reasonable to treat the whole genre of political stories of the "they failed to think rationally" kind as low-status and corrupt. These stories are always preaching to the choir, and only to the choir. They should not, and typically do not, convince an independent rational observer, much less anyone from "the other side". (The only exception is when such a story explicitly includes an explanation as to how it manages to avoid (i)-(iv) above. When such an explanation is compelling, the story may be saved. I think that happens very rarely).

I refrain from pointing out how (i)-(iv) apply particularly in the case of your first and second examples, because I think compiling such a list is easy enough, and avoiding an explicitly political discussion is a virtue.

Replies from: zyxwvutsr, Eliezer_Yudkowsky, Perplexed, Plasmon
comment by zyxwvutsr · 2011-01-15T23:56:25.356Z · LW(p) · GW(p)

"Josh Stieber's fellow soldiers did not change their minds about whether they should be in Iraq."

None of us has any idea whether or not they changed their minds about anything. A soldier can hold a fully-formed (and informed) negative opinion about the strategic efficacy of their mission, but still follow orders and complete that mission.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2011-01-16T08:02:37.070Z · LW(p) · GW(p)

Consider rewriting this as a post?

Replies from: Plasmon
comment by Plasmon · 2011-01-16T09:05:01.161Z · LW(p) · GW(p)

the story of people failing to account for compelling evidence is by itself a familiar, ubiquitous, low-status specimen of political propaganda.

In fact, one of the most frequent arguments you encounter as you read political discussions is the argument that the other side are ignoring obvious facts, and so failing to behave rationally, because they're blinded by their ideology. To a first approximation, everyone believes that about everyone else.

It seems to me that many of the arguments made on this site based on or referring to the Politics is the Mind-Killer article are based on extrapolations from a single well-known highly-polarized (essentially) 2-party system, i.e. the USA.

I am from a country with many political parties. No party ever gets more than 50% of the votes, in fact it is rare for any party to get over 20% of the votes. The parties are always forced to form a coalition to make a majority government. This system is not without its flaws, and far be it from me to argue that it is superior to the American system.

Nevertheless, it seems to me that many of the failure modes of 'politics', as often described on this site, are actually failure modes of present-day American politics, and not of politics in general.

For example, I encounter the argument described above, that "other side are ignoring obvious facts, and so failing to behave rationally, because they're blinded by their ideology" very rarely, even in political discussions. Politicians saying such things would find it hard to negotiate with other politicians to form a government, and are mostly smart enough to not say such things. They would have no difficulty admitting that other politicians/parties behave differently simply because they have different goals (they represent the interests of a different set of voters), while still acting on almost the same set of evidence.

Replies from: Tyrrell_McAllister, wedrifid, Jack, ChristianKl
comment by Tyrrell_McAllister · 2011-01-17T23:46:57.232Z · LW(p) · GW(p)

For example, I encounter the argument described above, that "other side are ignoring obvious facts, and so failing to behave rationally, because they're blinded by their ideology" very rarely, even in political discussions. Politicians saying such things would find it hard to negotiate with other politicians to form a government, and are mostly smart enough to not say such things. They would have no difficulty admitting that other politicians/parties behave differently simply because they have different goals (they represent the interests of a different set of voters), while still acting on almost the same set of evidence.

  1. I would expect that some parties know that they will never form a coalition with certain other parties. If so, do these "incompatible pairs" show more inclination to accuse each other of ideological blindness?

  2. It sounds like people within your country are pretty ideologically homogeneous. But you must differ ideologically from other countries. Your homogeneity leads me to expect that your country is relatively small. This, in turn, means that, relative to a larger country, you probably have less control over the policies of other countries, but those policies have a greater effect on your country's interests. Does the "ideological blindness" explanation sometimes get invoked when talking about why people in other countries chose those policies? (For example, I have seen some people in European countries blame some of their economic problems on a world-wide economic meltdown caused by the free-market ideology of the United States.)

Replies from: Plasmon, wedrifid
comment by Plasmon · 2011-01-18T07:06:00.547Z · LW(p) · GW(p)

I would expect that some parties know that they will never form a coalition with certain other parties. If so, do these "incompatible pairs" show more inclination to accuse each other of ideological blindness?

There is a party that is shunned by most other parties because it is almost universally agreed upon to be a racist party (even by themselves in some cases). To a certain extent, the answer to your question is yes. Nevertheless, the present attempt to form a government involves negotiations between a somewhat right-wing separatist party in one part of the country (got almost 30% of the votes in that part) and a somewhat left-wing socialist (yes they call themselves socialists. It's not an insult in Europe) party. The negotiations have been going on for many months, and many colourful analogies have been used (yesterday I heard the separatists compared to Hannibal, and the socialists to the Romans), but I have yet to hear either of them accuse the other of ideological blindness.

It sounds like people within your country are pretty ideologically homogeneous.

Perhaps the ideology here is closer to mono-modal than the ideology in the USA. But is this ideological inhomogeneity in the USA a cause or a consequence of the political system? Politicians in a 2-party system have an incentive to polarize: it ensures they get a large number of voters for their party, and then they just have to focus on the small number of "swing voters" remaining in the center.

Your homogeneity leads me to expect that your country is relatively small.

True. I'm sure the Netherlands have a similar system. I don't know what the largest country with a true many-party system is.

Does the "ideological blindness" explanation sometimes get invoked when talking about why people in other countries chose those policies?

Yes.

Replies from: Vaniver, TheOtherDave
comment by Vaniver · 2011-01-18T07:30:10.085Z · LW(p) · GW(p)

True. I'm sure the Netherlands have a similar system. I don't know what the largest country with a true many-party system is.

Doesn't India have a many-party system? And since they're the largest democracy, I think we're done :P

Replies from: blogospheroid
comment by blogospheroid · 2011-01-20T11:16:46.243Z · LW(p) · GW(p)

This is true. The last 5 governments have been coalition governments.

comment by TheOtherDave · 2011-01-18T13:56:02.479Z · LW(p) · GW(p)

But is this ideological inhomogeneity in the USA a cause or a consequence of the political system?

It's a good question, and the polarizing effect of political parties certainly does work the way you describe.

That said, I do think the rural/urban divide in the US is a real split in terms of the kinds of public services and private contributions different communities value and expect, and the political parties have exacerbated that rather than created it.

Regardless, I agree with your main point about the polarizing effects of a two-party system.

comment by wedrifid · 2011-01-18T00:47:38.704Z · LW(p) · GW(p)

free-market ideology of the United States.

Some people in this country are more inclined to criticize certain failures to implement the free-market ideology.

comment by wedrifid · 2011-01-16T11:23:56.882Z · LW(p) · GW(p)

Thanks for pointing out another perspective, there could be something to it. Which country are you from, if you don't mind me asking?

(Note that I think politics is always a mind killer, however I usually think of the problem more in terms of social politics and moral wrangling in general than governmental politics specifically.)

Replies from: Plasmon
comment by Plasmon · 2011-01-18T06:46:05.571Z · LW(p) · GW(p)

Which country are you from, if you don't mind me asking?

Belgium

comment by Jack · 2011-01-28T18:37:12.123Z · LW(p) · GW(p)

This is an interesting theory, and the two-party system may exacerbate the problem. Great Britain, however, has essentially a two-party system (Clegg's relatively new, barely relevant, ideologically indistinct party doesn't really count) and they seem to have about the same level of rationality in their politics as most of multi-party Europe. As others suggested, I suspect the difference has much more to do with the United States' cultural, economic, and racial diversity than anything else. America is a single tribe to a far lesser extent than other countries - even our white majority, which is smaller than it is in most of Europe, consists of four genetically and culturally distinct traditions (and that isn't including Hispanic). This kind of diversity means that we have less in common to start from and have resolved fewer basic issues. We've never gotten around to European-style social welfare for much the same reason - that kind of altruism isn't supported for those outside of the tribe. We're also large enough and wealthy enough to support a more fractured news media environment, which lets people insulate themselves from opposing viewpoints.

This does suggest that discussion of politics could be more successful on Less Wrong (given how much we all have in common) but having to work over the internet involves other difficulties.

I would be interested to see, however, whether the differing political climates influence the way people talk about politics. We could select some posters from Northern Europe and some posters from America. Have them discuss a series of emotional and controversial political issues. Have another group evaluate their comments (with the anti-kibitzer on) and grade them by degree of motivated cognition and mind-killing rhetoric. See if the Europeans do better.

comment by ChristianKl · 2011-01-28T15:46:18.749Z · LW(p) · GW(p)

The US is essentially a zero-party system. Passing laws in the Senate requires 2/3 of the votes, which usually means that politicians from both parties have to support the legislation.

US politicians have no problem with having discussions in private. They all believe in doing realpolitik. It's their public rhetoric that differs.

Replies from: jimrandomh
comment by jimrandomh · 2011-01-28T17:38:54.318Z · LW(p) · GW(p)

The US is essentially a zero-party system. Passing laws in the Senate requires 2/3 of the votes, which usually means that politicians from both parties have to support the legislation.

Not true; laws can pass with as few as 1/2 of the votes (51). However, this is increased to 60 if the opposing side chooses to filibuster (which non-selectively blocks all legislation), and it's increased to 2/3 if the President chooses to veto it. Use of the filibuster was rare before Obama came into office, at which point the Republican party adopted a policy of using it constantly.

Replies from: ChristianKl
comment by ChristianKl · 2011-02-02T22:58:33.275Z · LW(p) · GW(p)

Okay, 60 isn't 2/3, but it's still the number of votes that you need to prevent a filibuster.

To prevent the opposing side from filibustering you need to be able to speak with them.

comment by Perplexed · 2011-01-28T19:02:12.610Z · LW(p) · GW(p)

Take any well-divided political issue, and you'll find people on both sides building up detailed stories that show what it is exactly that ought to convince any reasonable person, but fails to convince their opponents due to their ideological bias. Such stories are almost always wrong. Typically they do one or several of:
(i) exaggerate the evidence or misrepresent its degree of uncertainty;
(ii) ignore conflicting evidence to the other direction;
(iii) tacitly assume a host of underlying convictions that are only obvious to your side;
(iv) ignore any number of ways the other side could find to explain your evidence without changing their beliefs, not all of them contrived.

A good analysis of what it is that makes politics (or at least American politics) a mind killer. In fact, worse than a mind killer. The habit of convincing yourself that those who disagree with you are subrational (and intellectually dishonest to boot) is the community killer - it is the first step in a rationalization of disenfranchisement.

Are there other subjects besides politics which lead to the same dehumanization of the people who disagree? I think so. One sees it frequently in theological disputes, pretty often in ethical disputes, and occasionally when discussing interactions between the sexes. But very rarely in discussions of the arts, music, spectator sports teams, grammar, and even nutritional practices - even though tribalism is common enough in these areas, no one tries to paint their opponents as either fools or knaves. Why the difference - is it just because these topics are less important than politics?

According to Aumann, we should be able to agree to disagree only if one of the following is the case:

  • We have different priors (or different fundamental values)
  • One of us is irrational
  • We don't trust each other to report facts and beliefs truthfully
  • We just don't talk enough.

So, if Aumann is to be believed, in those cases where we do talk enough, and in which we claim to share priors and fundamental values, persistent disagreement is likely to turn nasty - the only explanations left on the table are irrationality or dishonesty.

ETA: HT to Plasmon for pointing out the counter-intuitive fact that disagreement may be less nasty when divergence of fundamental values is acknowledged.

Replies from: NancyLebovitz, Jack, Nornagest
comment by NancyLebovitz · 2011-01-28T19:25:10.566Z · LW(p) · GW(p)

I don't think the current state of American politics is a result of structural problems-- it's gotten a lot worse as far as I can tell in the past decade or so. I don't know who started it, or who's done the most to amplify matters, but I think Republicans and Democrats have become a lot more contemptuous of each other.

Replies from: Oligopsony, Eugine_Nier, Perplexed, Jack
comment by Oligopsony · 2011-01-28T20:02:06.107Z · LW(p) · GW(p)

American politics has gotten steadily more partisan over the last several decades, mostly as a result of desegregation. While the south was under an apartheid regime many Republicans ("Rockefeller Republicans") were to the left of Democrats ("Dixiecrats.") This is no longer the case; every Democratic senator is to the left of every Republican senator - if you have strong politics yourself, the absolute distance looks small, but the lack of mixture is an undeniable fact. The decreased importance of regional party machines plays into this as well. Parties now function much more like coherent policy packages, so legislators have fewer allies outside of their own party.

Replies from: Jack
comment by Jack · 2011-01-28T20:13:47.202Z · LW(p) · GW(p)

While the south was under an apartheid regime many Republicans ("Rockefeller Republicans") were to the left of Democrats ("Dixiecrats.") This is no longer the case; every Democratic senator is to the left of every Republican senator

Desegregation isn't irrelevant to what has happened to American politics- but this doesn't have anything to do with where senators are on an arbitrary political spectrum. The particular manifestation of the left-right political spectrum you have in mind here is the invention of the post-segregation political climate. Pre-desegregation issues didn't break down into positions corresponding to our current political spectrum.

Replies from: Oligopsony
comment by Oligopsony · 2011-01-29T14:37:01.377Z · LW(p) · GW(p)

Pre-desegregation issues didn't break down into positions corresponding to our current political spectrum.

That's probably a better way of phrasing it. Perhaps I should have said that the great majority of variance in political opinion today can be explained with one eigenvector, while pre-desegregation it would have taken two. Either way, the greater level of ideological coherence is responsible, I think.

comment by Eugine_Nier · 2012-03-06T08:18:58.630Z · LW(p) · GW(p)

I suspect that's just nostalgia filter.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2012-03-06T11:48:12.254Z · LW(p) · GW(p)

Hard to prove-- I'm not nostalgic in general though. For example, I think food's generally gotten a lot better since the 90s.

A lecture about political rhetoric which shows that the nastiness level can change over time-- in particular, it goes into detail about shifts in which words got used in political discourse during the Nazi era.

I can tell you with certainty that Republicans and Democrats didn't used to have nasty names (Rethuglicans, Libtards) for each other.

comment by Perplexed · 2011-01-28T20:39:50.102Z · LW(p) · GW(p)

So, if Aumann is to be believed, in those cases where we do talk enough, and in which we claim to share priors and fundamental values, disagreement is likely to turn nasty.

it's gotten a lot worse as far as I can tell in the past decade or so.

I agree it has gotten worse, though I would trace it back at least to the Bork nomination fight. So, if I want to stick to my AAT-based explanation of the facts, I need to claim either that we have only recently started claiming to have the same fundamental values, or that we are talking more.

I believe that there has been a convergence regarding claimed values, over that period, but the situation regarding communication is more complicated. Political activists (and they are exactly the people who have poisonous attitudes about the opposition) probably do communicate more, but they do so over completely distorted channels. Democrats learn about what Republicans are saying from the Daily Show, the Onion, and Pharyngula. Republicans learn what Democrats are saying from Rush Limbaugh and Glenn Beck. I suppose the real question is why today's activists seem to think that these channels are sufficient.

Perhaps people would always have preferred those kinds of channels, but in the past they just weren't available.

Replies from: Nornagest
comment by Nornagest · 2011-01-28T21:08:25.370Z · LW(p) · GW(p)

Perhaps people would always have preferred those kinds of channels, but in the past they just weren't available.

Talk radio's been around for a while, and TV pundits only a little less so, so I'd hesitate to blame either one. The political blog scene might be more directly involved; it's highly polarized, has excellent visibility among politically aware individuals, tends to be kind of incestuous, and coincides roughly with the 10-year timeframe we're discussing.

comment by Jack · 2011-01-28T20:16:33.202Z · LW(p) · GW(p)

I think existing structural problems were dramatically magnified by the modern media environment. The growth of politically involved evangelicalism is also relevant.

comment by Jack · 2011-01-28T19:26:41.171Z · LW(p) · GW(p)

Except in American politics all of those are always the case. You just can't agree to disagree when the outcome of the argument influences who gets to be in charge of how much people are taxed, how much people get through social welfare and who gets thrown in prison.

Let's not make the mistake of thinking political discourse is in any way about trying to convince your opponents to change their minds - it's about trying to convince the small portion of the electorate that hasn't made up its mind that your opponents can't be trusted.

Actually... it's a prisoner's dilemma and that might explain why the problem is worse in the American system. Cooperating would be communicating and debating honestly to sort out who is right. Defecting means using lies, distortions and nefarious tactics to look better than your opponent. Cooperation would make both parties look better, but either party increases their chances of victory by defecting. And if you think the other party is going to defect you have to defect or else you'll lose. So defection is the dominant strategy and both parties defect, as in the prisoner's dilemma.

But in a multi-party system you have a) other agents that can punish defectors by not forming coalitions with them and b) a means by which the electorate can punish defectors... they have someone else to vote for. So the game here is the prisoner's dilemma with additional agents able and willing to punish defectors.

This actually seems like a sound structural analysis most of us could agree on- perhaps these kind of institutional questions can provide a rational foothold on political questions.
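
To make the payoff structure concrete, here is a toy numerical sketch of that argument; the payoff numbers and the size of the punishment are invented purely for illustration:

```python
# Toy model of the two-party game described above.
# C = honest debate, D = lies and distortions. Payoffs are invented.
PAYOFFS = {              # (row player's payoff, column player's payoff)
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def best_response(opponent_move: str, punishment: float = 0.0) -> str:
    """Row player's best move, with an optional cost imposed on defection.

    `punishment` stands in for coalition partners or voters who can
    credibly make defection expensive (the multi-party case)."""
    def payoff(my_move: str) -> float:
        base = PAYOFFS[(my_move, opponent_move)][0]
        return base - punishment if my_move == "D" else base
    return max(("C", "D"), key=payoff)

# Two-party case: defecting is the best response to anything, so both defect.
print(best_response("C"), best_response("D"))        # -> D D
# With a credible cost of 3 for defecting, honest debate wins instead.
print(best_response("C", 3), best_response("D", 3))  # -> C C
```

With no outside punishment, defection dominates and both parties end up at D/D; once other parties or the electorate can impose a large enough cost on defectors, cooperating becomes the better response, which is the structural difference being claimed here.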

Replies from: Perplexed
comment by Perplexed · 2011-01-28T21:00:22.685Z · LW(p) · GW(p)

Your Prisoner's Dilemma argument seems appealing - until you realize that electoral politics is an iterated game. The two players ought to be able to achieve an agreement. It is definitely not a zero-sum game. Both parties have a shared interest in keeping the country governable. They have apparently already discovered the virtues of Tit-for-Tat retaliation. Now if only the electorate were to provide a little added payoff to whichever side first makes an effort to be 'nice'.

I once attended a business (soft skills) training seminar in which a variant of the Prisoner's Dilemma was played. Two teams played PD against each other. But, within each team, it required a consensus decision (100% vote) to cause the team to cooperate. If any team member votes to defect, then the team as a whole must defect. The relevance to the question of civility between political parties should be obvious.

Replies from: Jack
comment by Jack · 2011-01-28T21:28:37.357Z · LW(p) · GW(p)

until you realize that electoral politics is an iterated game.

Only if you model each political party as the same entity over time. But Presidents are term-limited and losing in a general election often means a leadership change for the party. For some individual legislators the relevant time horizon is never more than two years away (and as in your training seminar, it only takes a few bad apples).

It is definitely not a zero-sum game. Both parties have a shared interest in keeping the country governable.

But this is a game-of-chicken-like incentive. They have incentive to swerve when the cars get too close, like maybe they'll sit together for a speech after one of them is nearly killed in an assassination attempt; but that isn't sufficient for general cooperation.

Now if only the electorate were to provide a little added payoff to whichever side first makes an effort to be 'nice'.

Sure, it would be nice if defecting were counterproductive - but the fact that the electorate always falls for the defection is what makes it a prisoner's dilemma.

In any case, at this point both parties (though, I'd say the Republicans in particular) have pre-committed to defecting for the foreseeable future. When you use dehumanizing rhetoric to describe the opposition your allies will see compromise as treachery. In this case, you'll face a well-funded primary challenge from your party's ideological extreme. This can be useful if you want to be pre-committed into voting a particular way- but obviously it is extremely dangerous when used in a semi-iterated prisoner's dilemma with certain high risks associated with D/D.

Every time I interact with you I think for a minute that you must be from Russia... heh.

Replies from: Perplexed
comment by Perplexed · 2011-01-29T00:41:48.485Z · LW(p) · GW(p)

[Modeling political partisanship as a PD makes sense] until you realize that electoral politics is an iterated game.

Only if you model each political party as the same entity over time.

Thx for that insight. I'll try to use it in my continuing struggle to promote discounting of expected future utilities.

Every time I interact with you I think for a minute that you must be from Russia... heh.

Oh, I'm even more alien than that. I used to be a Republican!

Replies from: Jack
comment by Jack · 2011-01-29T01:36:44.698Z · LW(p) · GW(p)

Oh, I'm even more alien than that. I used to be a Republican!

Ha!

Though just to be clear since I might have gotten a downvote or two for the grandparent... I don't mean to just be trashing Republicans. I think my claim that they are more pre-committed to defecting for the foreseeable future is justified by an objective consideration of the strength and organization of their class of activists and ideologues versus that of the Democrats. I don't think it is mind-killing bias leading me to the conclusion that the Tea-party has had much greater success recently than the netroots or whatever you want to call the equivalent on the Left. I didn't mean anything evaluative beyond that (I have my opinions but those probably are subject to bias).

(For the record I used to be a partisan, Left-wing Democrat. Now I'm vaguely aligned with that party but mostly for cultural and foreign policy reasons. Where I live, your vote doesn't count if you're not a Democrat. Ideologically I'm basically at the liberal-libertarian nexus.)

comment by Nornagest · 2011-01-28T20:15:41.021Z · LW(p) · GW(p)

Why the difference - is it just because these topics are less important than politics?

That's a really interesting question.

The Aumann analysis works well for politics. It works well for some theological questions, too: it's a handy explanation for why schismatic branches of a religion often become mutually antagonistic, for example. It isn't quite a complete description of antagonism when conformity with dogma is a fundamental value, but it's easy to augment Aumann with that.

When it comes to cultural disagreements, though -- arts, music, sports teams -- there's a tacit understanding that people's priors are different. Appreciating that sort of thing isn't just about the immediate experience; it can vary depending on who you're trying to impress, and also on immutable products of upbringing and convenience. And people accept this. No one expects a resident of Oregon to be a Green Bay Packers fan, unless the Packers have been having a particularly good year -- and even that comes with a status penalty associated with the expectation of future defection.

comment by wedrifid · 2011-01-14T12:35:29.184Z · LW(p) · GW(p)

There is a second human bias that causes you to cache an unrealistically high summary statistic for how often you change your mind: you think you change your mind, in general, pretty often, but unless you are an expert, highly-practiced rationalist, odds are that you do not. As evidence, try thinking of the last time you changed your mind about something and force yourself to specify what you believed beforehand and what you believed afterward. Me personally, I haven't changed my mind about anything that I can remember since about November 10th, 2010, and I'm sure I've expressed thousands of opinions since then. The odds are long.

It is interesting to hear you say that. I would not go as far as to contradict you but I would be equally unsurprised to find out that I changed my mind more than I thought I did. This too is a human bias that crops up all the time, albeit in different circumstances. People are quite capable of completely changing their beliefs to a new belief that they sincerely believe they had all along.

This is a miscalibration that can go either way depending on which way the ego is pulling at the time.

Replies from: atucker, bentarm, bgaesop
comment by atucker · 2011-01-15T14:43:02.577Z · LW(p) · GW(p)

I think that most people don't really update their far beliefs particularly frequently, but when in near mode will completely contradict their far beliefs when acting. Does that count as changing their mind?

Or do you also mean that people just consciously change their mind too?

Replies from: wedrifid
comment by wedrifid · 2011-01-15T20:39:36.751Z · LW(p) · GW(p)

Or do you also mean that people just consciously change their mind too?

I think I mean "unconsciously change their conscious beliefs". As an example I have found myself arguing on a different side of a debate to what I had argued in the past and thought "Oh, look at me. I'm all human and stuff with the changing my mind after the fact."

comment by bentarm · 2011-01-16T13:52:34.805Z · LW(p) · GW(p)

Excellent point!

This is one of those things where, on first reading, I just accepted the OP's assertion without question, but now having had it pointed out to me, I want data! So, if anyone knows, it must be someone at LW. Do people change their minds more often or less often than they think? For what values of "change their minds"?

comment by bgaesop · 2011-01-15T21:01:02.777Z · LW(p) · GW(p)

I would like to read something more in depth about this. Could you write up a post, or link to an article about it or something?

comment by steven0461 · 2011-01-14T22:17:05.090Z · LW(p) · GW(p)

I found the use of political examples grating, and wish we could enforce the "no politics" guideline more consistently.

Replies from: wedrifid, shokwave, bgaesop
comment by wedrifid · 2011-01-16T03:18:20.359Z · LW(p) · GW(p)

I found the use of political examples grating

The most grating part was that they relied on entirely naive assumptions. You don't need to posit 'don't change your mind' bias on the part of Josh Stieber's peers. Just that none of them were under the misapprehension that they had joined the Salvation Army.

and wish we could enforce the "no politics" guideline more consistently.

Consistently enforced 'guideline'? Something in there verges on oxymoronic.

Replies from: None, steven0461
comment by [deleted] · 2011-01-16T06:28:11.228Z · LW(p) · GW(p)

Just that none of them were under the misapprehension that they had joined the Salvation Army.

The soldier's notion that he would not be expected to participate in bloody reprisals and in violating other people's preferences was, historically speaking, hopelessly naive.

comment by steven0461 · 2011-01-16T03:26:40.266Z · LW(p) · GW(p)

Fair enough; when I edited "rule" to "guideline" I should also have edited "enforce" to "follow".

Replies from: wedrifid
comment by wedrifid · 2011-01-16T03:38:25.004Z · LW(p) · GW(p)

Now that is a sentiment that I can endorse.

comment by shokwave · 2011-01-15T01:34:12.342Z · LW(p) · GW(p)

The Iraq example was good and added to the post. I could go either way on the agriculture example. "We could replace you with an unthinking, unquestioning patriot and get the same result" could possibly be "unthinking, unquestioning automaton", but wouldn't cause the same feeling for me in the pit of my stomach, the "I really don't want to produce those results" feeling.

Replies from: BillyOblivion, Nornagest
comment by BillyOblivion · 2011-01-16T03:47:49.721Z · LW(p) · GW(p)

The Iraq example was awful because it is a very charged issue with people lying DEEPLY on both sides. There are a lot of people (myself included) who have been there, and who have either seen the same thing and gotten different impressions (and hence beliefs) about it, or people who have seen very different things and of course come away with different beliefs.

What Stieber did was an example of someone concluding that their actions were wrong (not irrational - a large part of why he thought they were wrong was that people around him were acting contrary to his beliefs and their own stated beliefs, and acting from emotion rather than reason) and changing what they were doing because of it, at a very high cost. (As an aside, much of his conversion seems due to his Christian beliefs, which I respect more than most people here seem to.) However, it is a bad example because there are very logical reasons why what he did was wrong, and those get in the way of understanding the author's point.

It would be like me arguing that I realized my diet, where I got most of my calories from starches and sugars, was wrong, so I switched to a diet much heavier in meat and fresh vegetables, and that eating things like soy and wheat, because of things like gluten, phyto-estrogens, and phytic acid, is bad for you. Now, it is true that I recognized a problem, did some research, evaluated the evidence, and made changes to my diet. This will be ignored in certain circles in favor of the position that EATING MEAT IS WRONG.

It is hard to get past the position (in my mind) that what Stieber did was wrong, and just deal with the point the author is making--that someone came to a decision and then made a change.

There is also the problem that the author slightly misrepresents the facts presented in the article. The people in Baghdad didn't say "Yankees go home"--they suggested that they did not want Americans in their part of Baghdad. That is a very different thing.

This is actually a very subtle form of propaganda, and of signaling. It's very rude.

(edited to fix a grammatical error)

comment by Nornagest · 2011-01-15T07:08:15.386Z · LW(p) · GW(p)

This must be weighed against the proportion of the audience in whom such a phrase would inspire exactly the opposite reaction (or, more likely, a stronger but opposite one). Though it's not the phrase itself but the associations the phrase triggers that'd do the damage; few people want to be unthinking adherents of anything, but many have heard phrases like "unthinking and unquestioning" used to describe their political allies.

No idea what those proportions would be here, though.

comment by bgaesop · 2011-01-15T20:53:32.271Z · LW(p) · GW(p)

I find the no politics guideline a bit odd. I mean, shouldn't a rational humanist arrive at certain political positions? Why not make those explicit?

Replies from: TheOtherDave, None, Dreaded_Anomaly, satt
comment by TheOtherDave · 2011-01-15T21:06:22.182Z · LW(p) · GW(p)

I agree that the exercise of converging, based on a consideration of plausible consequences of plausible alternatives, on a set of policy positions that optimally support various clearly articulated sets of values, and doing so with minimal wasted effort and deleterious social side-effects, would be both a valuable exercise in its own right for a community of optimal rationalists, and a compelling demonstration for others of the usefulness of their techniques.

I would encourage any such community that happens to exist to go ahead and do that.

I would be very surprised if this community were able to do it productively, though.

Replies from: benelliott
comment by benelliott · 2011-01-16T12:24:40.375Z · LW(p) · GW(p)

I don't think you're right about it being a compelling demonstration of their techniques. People who already agreed precisely with the conclusions drawn might pretend to support them for signalling purposes, and everyone else would be completely alienated.

Replies from: TheOtherDave
comment by TheOtherDave · 2011-01-16T15:35:14.568Z · LW(p) · GW(p)

That's certainly a possibility, yes.

For my own part, I think that if I saw a community come together to discuss some contentious policy question (moral and legal implications of abortion, say, or of war, or of economic policies that reduce disparities in individual wealth, or what-have-you) and conduct an analysis that seemed to me to avoid the pure-signaling pitfalls that such discussions normally succumb to (which admittedly could just be a sign of very sophisticated signaling), and at the end come out with a statement to the effect that the relevant underlying core value differences seem to be the relative weighting of X, Y, and Z; if X>Y then these policies follow, if Y>X these policies, and so on and so forth, I would find that compelling.

But I could be wrong about my own reaction... I've never seen it done, after all, I'm just extrapolating.

And even if I'm right, I could be utterly idiosyncratic.

Replies from: handoflixue
comment by handoflixue · 2011-01-18T23:11:29.067Z · LW(p) · GW(p)

I used to participate in a forum that was easily 50% trolls by volume and actively encouraged insulting language, and I think I got a more nuanced understanding of politics there than anywhere else in my life. There was a willingness to really delve into minutiae ("So you'd support abortion under X circumstances, but not Y?" "Yes, because of Z!"), which helped. Oddly, though, the active discouragement of civility meant that a normally "heated" debate felt the same as any other conversation there, and it was thus very easy not to feel personally invested in signaling and social standing (and anyone that did try to posture overly much would just be trolled into oblivion...)

Replies from: jfm
comment by jfm · 2011-01-19T20:39:47.625Z · LW(p) · GW(p)

I used to participate in such a forum, politicalfleshfeast.com -- it was composed mainly of exiles from DailyKos. Is this perhaps the same forum you're talking about?

comment by [deleted] · 2011-01-16T06:22:26.544Z · LW(p) · GW(p)

I find the no politics guideline a bit odd. I mean, shouldn't a rational humanist arrive at certain political positions? Why not make those explicit?

Politics is nearly all signalling. Positions that send good signals only occasionally overlap with positions that are rational.

Also the other apes will bash my head in with a rock so I really need to seem to be right even if I'm wrong. Being right on politics and the other side being wrong is a matter of life and death.

Replies from: bgaesop
comment by bgaesop · 2011-01-16T19:24:17.977Z · LW(p) · GW(p)

Talking about politics may be mostly signaling, but politics itself-that is, the decisions made and policies enacted-is something else that is really, really important.

If you care about the future of humanity and you have examined the evidence, then you should be concerned about global warming. I don't understand how that statement should be any more controversial than being concerned about the Singularity.

Replies from: None
comment by [deleted] · 2011-01-17T03:33:41.097Z · LW(p) · GW(p)

Talking about politics may be mostly signaling, but politics itself-that is, the decisions made and policies enacted-is something else that is really, really important.

Then I will get back to you as soon as I have meaningful influence over any policies enacted.

Replies from: bgaesop
comment by bgaesop · 2011-01-17T19:46:11.952Z · LW(p) · GW(p)

Good point. One interesting thing you can do is advocate for or attempt to participate in a revolution: the odds of succeeding may be very low, but the payoff of success could be almost arbitrarily large, and so the expected utility of doing so could be tremendous.
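As a rough sketch of that arithmetic (the probability, payoff, and cost figures below are invented purely for illustration, not estimates of anything real):

    # Toy expected-utility calculation; every number here is made up for illustration.
    p_success = 1e-6        # assumed (tiny) chance the attempt succeeds
    payoff = 1e9            # assumed (huge) utility gained if it does
    cost_of_trying = 100.0  # assumed utility cost of making the attempt at all

    expected_utility = p_success * payoff - cost_of_trying
    print(expected_utility)  # 900.0 -- positive despite the very long odds

Of course, the conclusion is entirely hostage to those made-up numbers; shrink the payoff or raise the cost and the sign flips.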

comment by Dreaded_Anomaly · 2011-01-15T21:20:17.553Z · LW(p) · GW(p)

One would think so, but there seem to be many libertarians here.

Replies from: None
comment by [deleted] · 2011-01-16T06:23:51.952Z · LW(p) · GW(p)

Upvoted for self-aware irony.

comment by satt · 2011-01-15T22:11:35.135Z · LW(p) · GW(p)

Which certain political positions did you have in mind?

Replies from: bgaesop
comment by bgaesop · 2011-01-15T22:29:11.261Z · LW(p) · GW(p)

Well, for example, one should oppose the use of torture. Torture is Bad because it in and of itself reduces someone's utility, and because it is ineffective and even counterproductive as a means of gathering information, and so there isn't a trade off that could counteract the bad effects of torture.

Replies from: wedrifid, ArisKatsaris, shokwave
comment by wedrifid · 2011-01-16T03:10:22.697Z · LW(p) · GW(p)

The word you are looking for is 'nice', not 'rational'.

Replies from: scav, bgaesop
comment by scav · 2011-01-17T09:24:26.785Z · LW(p) · GW(p)

Hmm. I suspect there's a tiny little bias, possibly politically influenced, whereby signalling that you are nice implies signalling that you are irrational: naive, woolly-minded, immature, not aware of how the world really works, whatever.

But it is rational for us to oppose torture because public acceptance of torture is positively correlated with the risk of members of the public being tortured. And who wants that? It is also negatively correlated with careful, dispassionate, and effective investigation of terrorism and other crimes.

I also oppose it because I love my neighbour, an ethical heuristic I would also defend, but it's not to the point in this case.

comment by bgaesop · 2011-01-16T19:24:52.002Z · LW(p) · GW(p)

That was assumed when I said that the person we're describing is a humanist.

Replies from: wedrifid
comment by wedrifid · 2011-01-16T22:12:14.629Z · LW(p) · GW(p)

I suppose then that the site that your conclusion would apply to would be humanistcommunity.org, not lesswrong. ;)

comment by ArisKatsaris · 2011-01-17T09:55:26.453Z · LW(p) · GW(p)

If you could convince people that it's ineffective and counterproductive, they wouldn't even need to be rationalists or even humanists in order to oppose it. So your opposition to torture (which I also oppose btw) doesn't seem like a conclusion that a rationalist is much more likely to arrive at than a non-rationalist -- it seems primarily a question of disputed facts, not misapplied logic.

There's one point that seems to me a failure of rationalism on the part of pro-torture advocates: they seem much more likely to excuse it away in the case of foreigners being tortured than in the case of their own countrymen. If the potential advantages of torture are so big, shouldn't native crimebosses and crooks also be tortured for information? This to me is evidence that racism/tribal hostility is part of the reason that they tolerate the application of torture to people of other nations.

Btw, I find "reduces someone's utility" a very VERY silly way to say "it hurts people".

Replies from: Vaniver
comment by Vaniver · 2011-01-17T10:20:38.492Z · LW(p) · GW(p)

Btw, I find "reduces someone's utility" a very VERY silly way to say "it hurts people".

Indeed, revealed preferences show us that not torturing people reduces many people's utility. It is a stretch to say it hurts them, however.

comment by shokwave · 2011-01-16T15:33:12.433Z · LW(p) · GW(p)

one should oppose the use of torture.

It would be trivial for me to construct a hypothetical where torture is unambiguously a good idea. It wouldn't even be hard to make it seem a realistic situation; I might even be able to use a historical example. To call something generally irrational, or to claim that rationality is opposed to a thing, you have to make the argument that in principle it's not possible for this to be either a terminal goal or the only available instrumental goal.

Replies from: scav, bgaesop
comment by scav · 2011-01-17T09:35:26.550Z · LW(p) · GW(p)

I think the original claim was that political opposition to torture was rational, assuming we are talking about the use of torture by the state to investigate crimes or coerce the population, domestic or abroad. That's a less strong claim, and fairly reasonable as long as you allow for the unstated assumptions.

It would be trivial for me to construct a hypothetical where torture is unambiguously a good idea.

A much stronger claim, IMO.

comment by bgaesop · 2011-01-16T19:26:51.740Z · LW(p) · GW(p)

I'd be really curious to see this example, given that it's an established fact that torture straight up doesn't work as a means of gathering information.

Replies from: shokwave
comment by shokwave · 2011-01-17T04:24:16.787Z · LW(p) · GW(p)

Torturing someone to scare others into compliance.

To make it realistic: enemy soldiers captured as prisoners of war. In order to keep them from staging a breakout and slaughtering the civilians in the large town you're defending, you torture the ringleader of the attempt - publicly and painfully sending a message.

Historically: Keelhauling for mutineers on sea vessels.

Replies from: scav
comment by scav · 2011-01-17T09:37:29.940Z · LW(p) · GW(p)

Unconvincing. You haven't demonstrated that torture will result in the best outcome, even in a hypothetical situation where the participants are already Doing It Badly Wrong.

Replies from: Vaniver
comment by Vaniver · 2011-01-17T10:27:19.936Z · LW(p) · GW(p)

He did demonstrate that bgaesop's reported fact applies in a limited domain, and that torture supposedly has other uses.

comment by TheOtherDave · 2011-01-14T14:10:57.182Z · LW(p) · GW(p)

Nice.

I would add to your list: choose an appropriate community.

If I wanted to stop/start eating animals, I think the single most effective thing I could do would be to start hanging out in a community of vegetarians/omnivores. (Especially if I considered it the moral/prudent thing to do, though it would work about as well either way.)

Similarly, my social circle is at this point largely polyamorous. My own relationship is not, essentially because neither I nor my husband have any particular interest in inviting a third person into it -- we barely manage to find adequate time and energy to maintain one healthy relationship! -- but the existence of a social norm supporting it has certainly changed how easy it would be to do so: if we wanted to start seeing other people, it would be no more complicated than simply mentioning the fact to our friends, and the social reaction would be roughly on a par with when we got married.

That said, this approach suffers from the "you can't change your behavior once you arrive on the scene" problem in a big way.

Replies from: Nisan
comment by Nisan · 2011-01-14T17:44:58.087Z · LW(p) · GW(p)

Yep, I was going to suggest this.

"you can't change your behavior once you arrive on the scene"

What does this mean?

Replies from: TheOtherDave
comment by TheOtherDave · 2011-01-14T18:04:35.290Z · LW(p) · GW(p)

It was a reference to the original post; the story about Josh Stieber. What I mean is, if I choose a community in order to reinforce a lifestyle, I make it more difficult to extract myself from that lifestyle if I later change my mind. It's a powerful solution, but it's not a flexible one.

comment by JoshuaZ · 2011-01-14T17:26:38.663Z · LW(p) · GW(p)

Off-topic halachic minutia:

I remember at one point a religious camp counselor caught me using a glowstick on the Sabbath, and advised me to throw the glowstick away, on the theory that kindling a new light on the Sabbath violated the applicable religious laws.

It sounds to me like your camp counselor was ignorant of the actual halachah, but had some vague idea of how the relevant halachot worked and tried to construct his own rationale for them. A glowstick does not produce significant quantities of heat, so a glowstick is probably at most Rabbinically prohibited. This means that arguably the consequences of what one does after having such an object activated may be less severe than if it were, say, a candle. In particular, one is in general prohibited on the Sabbath from benefiting from actions which one knew were violations of the Sabbath, but that might not apply since a glowstick is at most Rabbinically prohibited, and likely not even then. However, a candle is classified as "muktzah", a complicated status that roughly means that even if it had been lit before the Sabbath, it cannot be moved, and arguably that would apply to a glowstick. However, once one has picked up a muktzah object (whether on purpose or by accident), one may generally move it around until one puts it down.

Replies from: shokwave, komponisto
comment by shokwave · 2011-01-15T01:27:32.604Z · LW(p) · GW(p)

I don't think I can even begin to comprehend the kind of bizarre law-fetishism that could lead to this runaway ridiculous situation - where the answer to "can I move this candle" is "it's complicated".

Replies from: JoshuaZ, TheOtherDave, Costanza, None, Costanza
comment by JoshuaZ · 2011-01-16T15:22:51.123Z · LW(p) · GW(p)

There are a lot of comments in this subthread already addressing these issues, but I'm going to comment on one other issue that's worth bringing up. There's a common belief among Orthodox Jews that the rule system reflects reality at some level. This is most common among certain chassidic groups, especially the Lubavitch, who believe that doing mitzvot (commandments from God) actively makes the world a better place (less disease, fewer natural disasters, etc.) and that doing bad things has the opposite effect. In the context of that belief, understanding the exact boundaries of the laws is similar to understanding the exact boundaries of the laws of physics. Whether a given mass of enriched uranium will go critical is complicated, a function of the exact shape, the U-235/U-238 ratio, the presence and types of trace impurities, and other factors. We don't mind that because we all see the results. To some believers, whether religious Jews or adherents of other highly legalistic religions such as some branches of Catholicism, this feels very similar. Caring about the minutiae is an example of really acting like there's a dragon in the garage.

Replies from: TheOtherDave
comment by TheOtherDave · 2011-01-16T16:00:13.732Z · LW(p) · GW(p)

This was very much not the case within the Orthodox tradition I was raised in.

Something similar was true for mishpatim (1), I guess -- in the same way that secular communities frequently assume that their preferred policies make the world a better place -- but chukim (1) were presented entirely deontologically.

Sure, one could make an argument to the effect that God was omniscient and benevolent, and wanted these rules followed, and therefore it was likely that the effect of following the rules would be beneficial... but mostly nobody did; the more common stance was that obedience to God was the proper terminal value, and God wanted these rules followed, and therefore compliance with the rules was a proper instrumental value. Likely consequences didn't enter into it at all.

(1) Jewish tradition divides the commandments derived from the Old Testament, which are by tradition understood as coming directly from God (as distinct from the ones that are understood as coming from later rabbinical bodies), into two classes, chukim and mishpatim. Roughly speaking, mishpatim have a reason given and chukim don't.

comment by TheOtherDave · 2011-01-15T05:22:13.281Z · LW(p) · GW(p)

(shrug) I think what I said here applies. Try explaining the social rules governing successfully navigating your own community to a complete outsider -- or even better, someone with Asperger's -- and you may find it easier to comprehend how one gets into such a ridiculous situation.

comment by Costanza · 2011-01-15T02:20:23.681Z · LW(p) · GW(p)

I'm a gentile atheist and I find that Halachic debate and reasoning totally appeals to the detail-oriented fanboy in me. Ultimately, it was a barren intellectual exercise with respect to the real world, but a hugely challenging one -- a game -- nonetheless. Maybe one of the great games of history. Who can tell how many brilliant minds wasted their lives building this enormously refined system of law, based on the myths of one of many, many barbaric tribes? But with that said, in the modern day, many of the best jurists and legal scholars today are Jews who owe some debt to this cultural inheritance.

Replies from: shokwave, None, JoshuaZ, None
comment by shokwave · 2011-01-15T02:51:39.669Z · LW(p) · GW(p)

many of the best jurists and legal scholars today are Jews who owe some debt to this cultural inheritance.

This says some good things about the cultural laws, but it also says some bad things about our legal systems.

Replies from: Costanza
comment by Costanza · 2011-01-15T03:24:18.402Z · LW(p) · GW(p)

I think this must apply to every legal system which has governed humans so far. If laws are to be made known to everyone, and generally comprehensible, then they can't be too complicated. As it is, they tend to be plenty complicated. Even so, great numbers of people in aggregate are still far, far more complicated than any human system of laws. They will do things unanticipated by the lawmakers, and not exactly covered by the words of the lawmakers. Then, a court of law may be required to decide whether or how an inherently ambiguous law applies to an unanticipated fact pattern.

Replies from: shokwave
comment by shokwave · 2011-01-15T03:33:27.691Z · LW(p) · GW(p)

I think this must apply to every legal system which has governed humans so far.

I agree, and it's factually true; my concern was that if training on Halachic law was good practice for common law, then our legal systems suffer too much from complications. I think the Halachic system is bad, and to the extent that our legal system resembles it enough to measurably advantage Halachic scholars, our legal system is bad too.

There was a move at one point to write laws in Python or some other programming language; I would then argue that if thinking like a programmer made you a better jurist or legal scholar, it would say good things about both systems.

Replies from: Costanza, ShardPhoenix
comment by Costanza · 2011-01-15T03:47:15.043Z · LW(p) · GW(p)

I am seriously interested in more information about this approach. I think that right now, there are two modern systems of law: Roman-derived law and English-derived, or "common" law. Sharia law might count as a close runner-up. I think Halacha is well-developed, but not widely-enforced, so I would not count it as a major modern legal system. With that said, and admitting I don't know much about civil law or the religious laws, my impression is that all the above are similarly complicated, and have been for centuries. I am in doubt that human behavior and its ambiguities could be simplified by being encoded in Python. I think it's a really, really hard problem, at least as long as humans remain as unpredictable as they do.

Replies from: Will_Sawin, bogus
comment by Will_Sawin · 2011-01-15T15:06:26.435Z · LW(p) · GW(p)

Off-topic: Why does everyone on lesswrong say Python when they need to mention a programming language?

Replies from: TheOtherDave, shokwave, timtyler, None
comment by TheOtherDave · 2011-01-15T16:33:13.107Z · LW(p) · GW(p)

Rule 46b: I will not turn my programming language into a snake. It never helps.

comment by shokwave · 2011-01-15T16:29:39.726Z · LW(p) · GW(p)

It has a very high ease of learning to usefulness ratio?

edit: It seems to come highly recommended as a first programming language (certainly it was such to me).

Replies from: scav, Normal_Anomaly, Risto_Saarelma
comment by scav · 2011-01-18T15:52:35.414Z · LW(p) · GW(p)

Do you mean a high usefulness to difficulty of learning ratio?

Atari BASIC had a nearly infinite ease of learning to usefulness ratio. :)

Replies from: shokwave
comment by shokwave · 2011-01-18T16:54:48.909Z · LW(p) · GW(p)

Right.

comment by Normal_Anomaly · 2011-01-15T19:21:56.545Z · LW(p) · GW(p)

Python is my first (and currently only) programming language. It's easy to read, easy to learn, and useful.

comment by Risto_Saarelma · 2011-01-15T18:39:06.243Z · LW(p) · GW(p)

Python code is also reasonably easy to read. It's sometimes called executable pseudocode.

comment by timtyler · 2011-01-15T15:45:21.204Z · LW(p) · GW(p)

I did a Google duel - and it appears that "Java" beats "Python" for mentions around here.

comment by [deleted] · 2011-01-16T06:16:16.556Z · LW(p) · GW(p)

I don't get it either; I'm more of a C guy.

comment by bogus · 2011-01-16T14:34:38.739Z · LW(p) · GW(p)

I am seriously interested in more information about this approach. I think that right now, there are two modern systems of law: Roman-derived law and English-derived, or "common" law. Sharia law might count as a close runner-up. I think Halacha is well-developed, but not widely-enforced, so I would not count it as a major modern legal system.

David Friedman has taught a course in "Legal Systems Very Different From Ours" in both 2008 and 2010. See these course pages: [1] [2]

comment by ShardPhoenix · 2011-01-15T12:06:28.650Z · LW(p) · GW(p)

I think the Python thing was just for the payoff functions of securities, not for laws as such.

Replies from: shokwave
comment by shokwave · 2011-01-15T16:36:50.948Z · LW(p) · GW(p)

That is disappointing. Lawmakers who think like programmers seem like they would be a huge improvement on the current system.

Replies from: nerzhin, Sniffnoy
comment by nerzhin · 2011-01-15T20:13:39.183Z · LW(p) · GW(p)

Lawmakers who think like programmers might be an improvement. But I'm not sure.

On Less Wrong, this almost reads as "if only lawmakers were more like me, things would be okay." I'm skeptical.

comment by Sniffnoy · 2011-01-15T22:34:25.565Z · LW(p) · GW(p)

It would probably have to be coupled, though, with a state where laws are actually enforced consistently, and can be changed quickly if they end up screwing things up massively.

comment by [deleted] · 2011-01-16T06:11:23.202Z · LW(p) · GW(p)

Who can tell how many brilliant minds wasted their lives building this enormously refined system of law, based on the myths of one of many, many barbaric tribes?

They may have wasted their minds on it, but the better they were at wasting their minds, the higher their status was, the likelier it was they would marry a girl from another respected or wealthy family, and consequently the more they got to reproduce.

Were their minds truly wasted? Or did it, by happy accident, a hack of our out-of-date reward systems, manage to produce more brilliant, if deluded and blinded, minds? History has also since shown that the minds aren't irreversibly deluded.

I can't help but wonder if we would have had quite as many wonderful minds like Bohr, Einstein, Hertz, or Nobel prize winners like Richard Phillips Feynman or Isidor Isaac Rabi (!) if those minds in the late Middle Ages or early modern period hadn't been wasted.

Replies from: JoshuaZ
comment by JoshuaZ · 2011-01-16T15:29:04.720Z · LW(p) · GW(p)

Possibly, but at the same time, a lot of those people in the Middle Ages were still wasting time, and people are still doing so today. There's no question, for example, that Maimonides was brilliant. He was impressive for his accomplishments in philosophy, medicine, and even in other areas that he only dabbled in (such as math). That he spent most of his time on halachah certainly held back society. And he's not the only example. Similar remarks would apply to many of the great Rabbis in history and even some of the modern ones.

Replies from: TheOtherDave
comment by TheOtherDave · 2011-01-16T15:44:13.198Z · LW(p) · GW(p)

I'd be interested in seeing how you draw the line between Maimonides' work in halachah and in philosophy. I can certainly identify outputs that I would classify as one or the other, but I would have a very hard time drawing a sharp line between the processes.

Replies from: JoshuaZ
comment by JoshuaZ · 2011-01-16T16:24:39.210Z · LW(p) · GW(p)

I agree that there isn't a sharp line. But if we just look at the material that falls unambiguously into halachah, as opposed to all the material that falls into philosophy or the borderline, there's a lot more halachic material.

Replies from: TheOtherDave
comment by TheOtherDave · 2011-01-16T16:33:36.020Z · LW(p) · GW(p)

Sure. Again, classifying the outputs isn't too hard. Philosophical and halachic writing are different genres, and it's relatively easy to class writing by genre. Sure, there's a fuzzy middle ground, but I agree that that's a minor concern.

But your argument seems to depend on the idea that if he spent a year thinking about stuff and at the end of that year wrote five thousand words we would class as halachah and five hundred words we would class as philosophy, that means he wasted that year, whereas if it had been the other way around, that would advance society.

Before endorsing such an argument, I'd want to know more about what was actually going on in that year. I could easily see it going either way, simply because there isn't a clear correlation in this context between how useful his thinking was vs. what genre he published the results in.

comment by JoshuaZ · 2011-01-16T15:35:11.213Z · LW(p) · GW(p)

Regarding fanboyism, that's certainly an aspect it has; among the more self-conscious Orthodox Jews there's a feeling that they understand it as an intellectual game. And for what it is worth, when I've written blog entries about things like the halachot of making a horcrux, or the kashrut status of a Star Trek replicator, most Orthodox readers are interested and generally not offended.

Replies from: Alicorn
comment by Alicorn · 2011-01-16T15:40:39.375Z · LW(p) · GW(p)

the kashrut status of a Star Trek replicator

I want a link to that.

Replies from: JoshuaZ
comment by JoshuaZ · 2011-01-16T16:10:28.589Z · LW(p) · GW(p)

Discussed in this entry.

comment by [deleted] · 2011-01-16T06:18:34.086Z · LW(p) · GW(p)

But with that said, in the modern day, many of the best jurists and legal scholars today are Jews who owe some debt to this cultural inheritance.

I think Lawyers are like Warriors in this regard.

comment by [deleted] · 2011-01-16T05:43:24.452Z · LW(p) · GW(p)

What you call law fetishism I call ensuring high fidelity in meme transmission while at the same time trying to make irrational memes that are adaptive easier to rationalise.

The whole collection of memes that led to this "ridiculous situation" was fitness-enhancing, even from a genetic perspective, but the people who transmitted it didn't know why it worked (note: we probably still don't appreciate how culture modifies behaviours and expectations in unknown ways).

It was improved mostly via groups that were unfortunate enough to stumble on a variation that didn't work dying off or being outshone. The Shakers are an example of a Christian group stumbling upon a variation (or mutation) of Christianity that unfortunately doesn't work. However, over time people found it harder and harder to execute all these seemingly random instructions (junk memes build up over time), and that is where scholarship comes in.

Sometimes you can streamline the rules, at the price of adding a few more junk memes; other times you can create convoluted rationalizations that most never study but that help those in positions of authority be more or less sincere in promoting the rules.

Abrahamic faiths are basically elaborate collections of scripts that, when executed in the "ancestral" environment, helped the entire list of memes come down to the present. As I previously stated, they often even helped general genetic fitness (just look at the increase in the numbers of Jews in Eastern Europe from the 16th century onwards, or the increase in Amish numbers since the early 20th century). They could employ a much longer list of behaviours than the limited lists of taboos and rituals of older traditions because of writing and the boon that is the idea of an omnipotent, omniscient agent, which basically acts as a universal solvent for the feeling of cognitive dissonance.

Religious scholars and leaders were, in times when they were greatly respected, basically social engineers who tried to tweak the DNA of their society to keep it competitive or to enhance their own benefit from the super-organism.

comment by Costanza · 2011-01-15T02:38:14.080Z · LW(p) · GW(p)

the answer to "can I move this candle" is "it's complicated".

I think there's a principle at work here. I suspect that this has been expressed more formally. But...

... laws proposed to govern human behavior -- which is complicated -- can only anticipate a portion of that behavior. Lawmakers may enact the most rigid, black-and-white, unambiguous law you could imagine, but it must be expressed in words more ambiguous than the words used in mathematics. There will be a grey area, and human action will find that grey area. It will be complicated on the fringes.

This applies to any system of law by which humans are to govern themselves, Halacha just as much as the United States Code of Federal Regulations.

Replies from: shokwave
comment by shokwave · 2011-01-15T02:53:13.000Z · LW(p) · GW(p)

That sounds related to Goodhart's Law.

There will be a grey area, and human action will find that grey area. It will be complicated on the fringes.

Could reasonably be called "Costanza's Corollary to Goodhart's Law".

Replies from: Costanza
comment by Costanza · 2011-01-15T03:16:27.861Z · LW(p) · GW(p)

I see Wikipedia says Goodhart's Law may mean: "that once a social or economic indicator or other surrogate measure is made a target for the purpose of conducting social or economic policy, then it will lose the information content that would qualify it to play such a role."

I tentatively think prescriptive laws do not correspond to measures, whether surrogate or direct. Right now, I think surrogate measures are like maps, which may or may not match the territory. On the other hand, I think laws are not like maps. Rather, they are like plans, especially like plans made for exploring territory that has not yet been mapped. Every so often, explorers must revisit their plans in the face of reality. My metaphor may be breaking down a bit here, but imagine law as a single set of instructions issued by the King to innumerable groups of hopeful colonists, setting out to explore a new world. The instructions may suffice for many or most, but some will have to make some creative interpretation.

comment by komponisto · 2011-01-15T02:10:26.083Z · LW(p) · GW(p)

Out of curiosity, do people who grow up under this sort of regime end up thinking it's normal, similarly to the way people raised in Christianity end up desensitized to the absurd-sounding nature of the beliefs about virgin birth and so on? Does it cause them to e.g. be more accepting of government regulation than average? Or is there some kind of compartmentalization going on where they continue expecting rules in general to make some sort of sense (and not interfere with practical functioning), just not those labeled "religious"?

My suspicion, of course, is the latter (just as people compartmentalize their epistemic beliefs, and allow their absurdity heuristic to function more-or-less normally outside of the religious domain), but I'd be curious to hear reflections from those who were raised in strict legalistic religions about the extent to which such practices actually struck them as absurd inside their own minds (even allowing for belief in the empirical claims of the religion about the nature of the universe).

Replies from: TheOtherDave
comment by TheOtherDave · 2011-01-15T05:19:25.423Z · LW(p) · GW(p)

I can't speak for anyone else, but I was raised an Orthodox Jew and I basically took to treating it as "normal" in the same sense that any set of arbitrary social rules is "normal." It was no weirder than the rules governing, say, when it was OK to wear a T-shirt and sneakers vs. when it wasn't, or when it was OK to eat the last piece of cake, or whatever.

And I still basically think that. It's not that there's some default state where there aren't any arbitrary rules to follow, against which I can compare the rules of Orthodox Judaism. There are just different cultures, each with its own set of rules.

I suspect that, again as with any set of social norms, the key distinction is between people who are raised with only one such set of norms, compared to people who are raised having to navigate among several. The former group can treat their culture's rules as invisible and default and "common sensical"; the latter group can't get away with that so easily.

comment by MichaelVassar · 2011-01-15T18:48:32.513Z · LW(p) · GW(p)

Anyone interested in pointing Less Wrong out to Josh Stieber, from the linked Slate article? I'll contact the author.

comment by cousin_it · 2011-01-14T10:17:34.695Z · LW(p) · GW(p)

This paragraph:

What is your reference class for predicting your own behavior?

and this one:

I think it probably does help, though, to be a bit of a drama queen.

crossed the line from good to awesome for me. Thanks for the post!

comment by komponisto · 2011-01-14T22:39:39.252Z · LW(p) · GW(p)

There is a third human bias that causes you to tell yourself that you have successfully changed your mind when you have not really done so. The adherent of the Reformed Church of Dragon leaves the garage door open, and cheerfully admits to anyone who asks that there is probably no such thing as an invisible dragon, yet she is unaccountably cautious about actually parking her car in the garage. Thus it is worth knowing not just how to change your mind, but how to change your habits in response to new information.

Related: The Mystery of the Haunted Rationalist.

comment by Alicorn · 2011-01-14T13:14:24.461Z · LW(p) · GW(p)

What, if anything, would convince you to stop (or start) eating animals? Not merely to admit, verbally, that it is an acceptable thing for others to do, or even the moral or prudent thing for you to do, but to actually start trying to do it?

In my case: adequate alternatives. I tried to become a vegetarian once before I succeeded. However, this was before the day I spontaneously woke up one morning with a taste for vegetables (it happened, it was weird), so I ate grilled cheese every day for a few days and then gave up. Later, when I a) liked vegetables and b) had access to adequate kitchen facilities so I could learn to cook, the transition was close to effortless. I could always arrange to have something around that I was enthusiastic about eating that wasn't meat.

What, if anything, would convince you to stop (or start) expecting monogamy in your romantic relationships?

I don't trust myself to be well-calibrated about this. People who felt strongly as I do when they were my age have undergone significant changes. However, I've discussed this some.

To save (or borrow) significant amounts of money?

I have a dreadful aversion to debt I'm not sure I can pay back, so I guess I'd have to find not doing whatever would be done with the loan more aversive than carrying the debt, and expect to be able to pay back (or consider my enterprise important enough to be willing to expect not to pay back) the amount. So far I've never seriously cared about accomplishing anything that I could only accomplish with a large sum of borrowed money. Savings-wise, I already save all my income by default and only spend it after careful consideration.

To drop one hobby and pick up another? To move across the country?

I did both in the past year. I have been carefully cultivating the habit of a) noting when I do something "for fun", and b) continually evaluating whether those things are still fun. Things that I do "for fun" that are no longer fun, I need to repair or drop. The extreme case for me was acknowledging that I'd gone to grad school because I thought it would be fun, and then it wasn't anymore. So I dropped out and went to work at SIAI for a while, incidentally across the country. I'm pretty good at determining whether things are still fun by introspecting; remembering to do it is the problem.

comment by Desrtopa · 2011-01-17T18:04:13.575Z · LW(p) · GW(p)

If you gauge the dosage correctly, the propaganda might nudge your opinion just enough to make you actually adopt the new action that you felt would adequately reflect your new beliefs, but not enough to drive you over the cliff into madness.

This sounds difficult enough to do reliably that I have to question whether it's actually a good tactic.

One thing I think may be helpful, that I've noticed some people here seem to practice: if someone says something to you which makes you think about revising your opinion, tell them so. You'll have forced yourself to take greater notice of your uncertainty, and you'll have social pressure to have something to show for your consideration after thinking about the matter further.

comment by prase · 2011-01-14T12:35:53.246Z · LW(p) · GW(p)

One of the best articles here lately. The first two pieces of advice are very good, even if probably not new, but you have formulated the point very persuasively. I would also not worry about the political example: in spite of the mind-killing abilities of politics, the way you have stated your examples is unlikely to incite a flame war in this community (if it does, I will be afraid that our level of rationality is not much higher than that of average folk, despite our aspirations).

I have a little problem with the third piece of advice, though. I suspect it would not work for many people. As a defense mechanism against dark arts, I have built an ability to relatively easily spot logical fallacies in arguments, and it happens that reading propaganda decreases my sympathies toward the cause which the propaganda is promoting.

(I was a fairly strong leftist a few years ago - now I am much closer to centrist views, partly due (at least I think so) to reading a lot of propagandist crap on one left-wing server. I have decreased the frequency of reading it not only because I now have a much lower expectation of finding something reasonable there, but also because I fear changing my political adherence for an irrational reason (stupidity reversal). Still, one of the reasons why I have not moved further to the right is the occasional encounter with equally stupid right-wing propaganda. And the existence of Randroids, of course.)

Replies from: Will_Sawin
comment by Will_Sawin · 2011-01-14T16:10:57.319Z · LW(p) · GW(p)

Is having correct political beliefs important to you? Because it seems like you have a serious deficiency there that, since you are aware of it, you may be able to correct. For instance, exposing yourself to lots of high-quality arguments from both sides might help.

But we have no theory of correct political beliefs, so you might be kind of helpless here.

Replies from: prase, nazgulnarsil
comment by prase · 2011-01-14T16:23:26.120Z · LW(p) · GW(p)

They used to be more important than they are now.

I didn't intend to imply that I have avoided good arguments in favour of poor propaganda. I think I have heard most of the good arguments too (and the stupidity of the poor arguments is more apparent when compared to the good ones). I have only described the effect which propaganda has on me. This effect is irrational, since it activates the stupidity reversing reflex, so I try to avoid it; I wanted to point out that using propaganda as a mind hack may work differently for different people.

comment by nazgulnarsil · 2011-01-15T11:19:03.732Z · LW(p) · GW(p)

I'd start with coherence if you're looking for correct beliefs of any sort. centrism certainly doesn't meet this guideline.

Replies from: prase
comment by prase · 2011-01-15T12:52:45.092Z · LW(p) · GW(p)

It may certainly depend on what exactly you mean by centrism, but can you be more explicit in your statement about its lack of coherence?

Also, I am more likely to form my political beliefs based on my actual values rather than on elegant philosophical principles. Human values are complicated and unlikely to be expressible by a succinct coherent belief system.

Replies from: nazgulnarsil
comment by nazgulnarsil · 2011-01-15T14:38:09.179Z · LW(p) · GW(p)

is the centrist position today the same as the centrist position of 10 years ago? what about 100? what about the centrist position in germany in 1942? taking the average of two wrong positions is unlikely to produce a correct one.

and you admit that your values are incoherent so readily? that is unusual but highly beneficial as a starting point.

Replies from: prase, Will_Sawin
comment by prase · 2011-01-15T19:28:49.026Z · LW(p) · GW(p)

I have written that I am now closer to the centrist (as the word is defined now and in my country) views than I have been few years ago, when I was sympathetic with a bit more radical leftist (once again, as defined now and in my country) opinions. I have not included the clarifications in the parentheses because I did find that interpretation obvious. Since your replies imply that my words can be interpreted differently from what I have meant, I should have been perhaps more clear. So, I do not say that I average the extreme positions and that I am close to the centrist position just because it appears to lie in the centre, and thus I will shift my opinions when the centre moves.

and you admit that your values are incoherent so readily? that is unusual but highly beneficial as a starting point.

That they are incoherent or inconsistent doesn't mean that they are so in an obvious manner. Values are complicated and not all conflicts are easy to see, and even after being seen, they are not easy to resolve. Think about the trolley problem for example.

Edit: just to be more clear, I have to add that the (approximately) centrist position I hold means sharing some opinions which are more common on the right and others which are prevalent on the left, not being close to average on each opinion separately.

comment by Will_Sawin · 2011-01-15T15:10:12.493Z · LW(p) · GW(p)

No, he said that they're probably either incoherent or not succinct.

There should exist coherent positions that are roughly in the center of the two parties/ideologies. One can argue that libertarianism is such a position, for instance.

Replies from: nazgulnarsil
comment by nazgulnarsil · 2011-01-15T15:12:47.489Z · LW(p) · GW(p)

directional metaphors may fail if looked at with reasonable rigor.

Replies from: prase
comment by prase · 2011-01-15T19:46:36.729Z · LW(p) · GW(p)

Any statement may appear incoherent if looked at with unreasonable rigor.

comment by [deleted] · 2011-01-16T06:30:22.290Z · LW(p) · GW(p)

The same troops in the same town confronted with the same evidence that their presence was unwelcome all continued to blame and kill the locals.

Generally, when occupying another country or supporting its government with your troops, you only care what the frak the locals think in a very limited, well-defined, and, may I say, small sense.

If you have decided that, all things considered, you want your troops in a location, that decision generally already takes into account the locals not wanting you there. The most one can say to local opposition is "noted".

The theory is that American military presence in Iraq is good for Iraqis because it helps them build democracy, or security, or their economy, or some combination. It's moderately challenging to concede that the theory could be flawed.

Sure, some selective application of violence might actually benefit the other, but that is not, nor is it ever, the real reason why anyone does it. This is especially true when one needs to invest non-trivial amounts of resources or effort.

Replies from: katydee
comment by katydee · 2011-01-16T07:34:47.103Z · LW(p) · GW(p)

This post basically boils down to "MOST PEOPLE ARE STUPID AND EASILY TRICKED, BUT I'M NOT." Probably true, but do you have to be so overt about it?

Replies from: None
comment by [deleted] · 2011-01-16T08:53:11.615Z · LW(p) · GW(p)

Upvoted for bringing my attention to this. I didn't feel so, but reading my response to the second comment I see how one can get that impression. I've edited that bit of the text while trying to keep its original meaning.

Does it come off any better?

Part of the reason I perhaps came off the wrong way might have been that I was mistaken in thinking that not many people are genuinely fooled by the rationale, and that most are aware of an ulterior motive that makes it awfully convenient to "help" that particular group of people, unless they also seek to ensure universal adherence to their values.

In which case I also thought it was obvious to most that when they cheer for "spreading democracy" or things like that, what is happening on a basic level is satisfaction of the urge to convert the infidels, not a rational judgement based on an unbiased consideration of what is best for them.

If it was the first part of my post that bothered you, perhaps I should emphasise that I don't object to not caring what the locals think, to an extent; I just object to not being honest with oneself about it. I also implicitly stated (small) that by the standards we apply to some other situations concerning government and violence, an occupying force cares a little bit less than one might first assume.

Replies from: katydee, zyxwvutsr
comment by katydee · 2011-01-18T01:39:23.710Z · LW(p) · GW(p)

The new version does indeed seem better, though the second part of the post seems less clear and perhaps overly general now-- I'm extremely confident that violence is applied in at least some cases primarily to help others.

comment by zyxwvutsr · 2011-01-16T14:43:02.416Z · LW(p) · GW(p)

"...a occupying force cares a little bit less than one might first assume"

I don't mean to be overly critical of your imprecise language, but in this context I think it is important to note that a "force" does not care at all. More to the point, a military force comprises individuals who hold a whole range of opinions and who may act in ways that are contrary to those opinions.

comment by Vladimir_Nesov · 2011-01-14T21:28:42.258Z · LW(p) · GW(p)

rationalism

This prompted me to post this article, where I write:

I feel that the term "rationalism", as opposed to "rationality", or "study of rationality", has undesirable connotations.

(Discuss there, not here.)

comment by Luke Stebbing (LukeStebbing) · 2011-01-19T01:30:19.652Z · LW(p) · GW(p)

Off-topic: Meatless (and pattyless) sandwiches are surprisingly good if you load them up with most of the vegetables. I go to Subway a few times a month but haven't had a meat sub there in years.

comment by Psychohistorian · 2011-01-18T17:31:56.379Z · LW(p) · GW(p)

I think the examples used here are absolutely terrible, and I think they indicate a fundamental flaw underlying this theory. Basically, what you call "irrational" in this context, I'd call "rational but dishonest about its motives."

The purpose of having US troops in an area is not to make the locals happier. I don't see much of a reason military leadership should care about local opinion except insofar as it advances their actual objectives. This is true in both the sense that a mugger shouldn't care about his target's feelings, and a parent shouldn't care exclusively about a child's opinions. Which is the more apt analogy is immaterial; the argument here does not provide evidence that there is any major realization to make.

Similarly, as I realized talking to a US ambassador, food aid has relatively little to do with helping people. US food aid is basically the American government giving money to American food producers. Incidentally, food is shipped to third-world countries, where it may have a positive or negative effect. Yes, there may be other ways for agricultural producers to get money - and they do pursue those. There isn't much of a better way to get this particular money - unless you've got some articulable theory of how ending such subsidies would help the agricultural industry. Remember, it also allows them to nip future competition in the bud by providing zero-cost products. This isn't a case of failed rationality; it's a case of perfectly functional rationality with a sinister motive.

I realize there is some risk of this being a perfectly general counterargument - there's always some function for which a given action is rational - but its application here is precise. There are clear and obvious motives that are different from those analyzed, and those motives are being pursued relatively efficiently.

For the military, you assume that the purpose of troops is significantly related to doing what the locals want. If that were the case, there wouldn't be too much sense in deploying troops in a foreign nation. Moreover, this example assumes that military action against US troops is the will of the general populace, without actually substantiating that theory. Basically, this isn't an example of being mistaken. The response being used may be ineffective (and if that's the limit of your point, I agree - it's just rather confused by the surrounding statements), but the idea that the rational response somehow involves leaving - which I feel is heavily implied - is, from a game-theoretic perspective, moronic.

comment by MichaelHoward · 2011-01-18T13:02:15.073Z · LW(p) · GW(p)

Your "sunk costs" link is broken. You maybe want to link here, here, or if you're feeling evil, here.

Replies from: apophenia
comment by apophenia · 2011-03-06T00:50:40.986Z · LW(p) · GW(p)

It cost me three willpower points or so not to click the third.

comment by utilitymonster · 2011-01-16T15:50:10.480Z · LW(p) · GW(p)

On the symbolic action point, you can try making the symbolic action into a public commitment. Research suggests this will increase the strength of the effect you're talking about. Of course, this could also make you overcommit, so this strategy should be used carefully.

comment by zyxwvutsr · 2011-01-16T14:28:55.083Z · LW(p) · GW(p)

"Out of curiosity, do people who grow up under this sort of regime end up thinking it's normal, similarly to the way people raised in Christianity end up desensitized to the absurd-sounding nature of the beliefs about virgin birth and so on? Does it cause them to e.g. be more accepting of government regulation than average?"

Why not look at relatively more secular western Europe versus the relatively more religious US and see which population is more accepting of government regulations. That is to say that either you have it precisely backwards or there is no observable correlation.

Replies from: wedrifid
comment by wedrifid · 2011-01-16T14:51:52.622Z · LW(p) · GW(p)

Why not look at relatively more secular western Europe versus the relatively more religious US and see which population is more accepting of government regulations. That is to say that either you have it precisely backwards or there is no observable correlation.

That does not actually follow.

Replies from: zyxwvutsr
comment by zyxwvutsr · 2011-01-16T15:24:23.630Z · LW(p) · GW(p)

Why not?

Replies from: shokwave
comment by shokwave · 2011-01-17T04:27:19.740Z · LW(p) · GW(p)

There are three possibilities: Mass_Driver has the causal flow right, Mass_Driver has the causal flow wrong, or there is no causal flow. Pointing out the existence of the last two options doesn't mean those are the only options. It is still entirely possible, after your comment, that Mass_Driver has it the right way around. Therefore, it doesn't follow.

comment by [deleted] · 2011-01-16T06:44:42.954Z · LW(p) · GW(p)

3) Over-correct your opinion by reading propaganda

Your mileage may vary; please use this tactic carefully.

I already have problems that could easily be made worse by this despite your warning. I have a hunch many on LW do.

comment by [deleted] · 2011-01-16T05:33:06.865Z · LW(p) · GW(p)
  1. There's significant ambiguity about what counts as "changing" a belief. If you look at belief in the only way that's rational—that is, as coming in degrees—then you "change" your belief whenever you alter subjective probability. Your examples suggest that you're defining belief change as binary. I think people's subjective probabilities change all the time, but you rarely see a complete flip-flop, for good reason: significant beliefs often rest on vast evidence, which one new piece of evidence, no matter how striking, won't be apt to reverse the direction of (in binary terms).

  2. From this Bayesian standpoint, the over-correction advice (e.g., reveling in propaganda) is misguided, because overshooting isn't harmless; you can make yourself epistemically worse off (although this won't be evident in a binary model). For example, if you start with a likelihood estimate of .4 and obtain evidence that should move you to .6, but instead you over-correct and end up at .9, you end up more wrong than before, although this territory is concealed when you use a binary map.

  3. You're discussing, I think, mainly far-mode beliefs. Near-mode beliefs change all the time. (Any time you try to solve an equation and try one solution and abandon it for another, you've changed your belief.) Far-mode beliefs resist change because far mode evolved as a means of deceptive signaling. (H/T Robin Hanson) As such they are less responsive to evidence and more responsive to status concerns, e.g., appearing consistent.

The only way to change a far-mode tendency is on its own terms, by changing what you are committed to signaling. Rather than signaling you are a consistent person who's always right the first time, you turn yourself into a person who's inclined to signal that he's an open-minded person, responsive to the evidence and capable of learning from mistakes.

Replies from: shokwave
comment by shokwave · 2011-01-17T04:35:38.387Z · LW(p) · GW(p)

From this Bayesian standpoint, the over-correction advice is misguided, because overshooting isn't harmless; you can make yourself epistemically worse off.

If you update on the evidence from .3 to .5, and then later evidence shows you that you still act as if you believe the probability is .3, then you should consider irrationally changing your belief. Of course, you risk over-updating to 0.7 or 0.9, but that is a question for expected utility, not a point against the concept as a whole.

It may be possible to be more accurate with over-correction than simply pushing in the right direction; inundate yourself with "0.5" propaganda or something.
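As a minimal sketch of that trade-off (squared error as the scoring rule, and all of the numbers below, are my own assumptions, not anything established in the thread):

    # Toy comparison: squared distance from the "correct" posterior of 0.5
    # stands in for epistemic loss.
    correct = 0.5
    candidates = {
        "stay at 0.3 (under-update)": 0.3,
        "land on 0.5 (ideal update)": 0.5,
        "overshoot to 0.7": 0.7,
        "overshoot to 0.9": 0.9,
    }
    for label, belief in candidates.items():
        print(f"{label}: loss = {(belief - correct) ** 2:.2f}")

    # Overshooting to 0.7 costs exactly as much as not moving at all (0.04),
    # and 0.9 costs more (0.16), so deliberate over-correction only pays off
    # if you would otherwise under-update by more than you are likely to overshoot.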

comment by Jordan · 2011-01-15T21:26:25.858Z · LW(p) · GW(p)

1) Specify a quitting point in advance.

Along this same line I try and always keep my beliefs and actions under the banner of a more general ideal or goal. For instance, if I wanted to help decrease existential risk and decided that the best way was to move to San Francisco to be closer to SIAI, then instead of simply caching the goal 'Move to SF' in my mind, I would try and cache 'Reduce existential risk by moving to SF'.

This takes extra memory, but it serves to remind you to question the validity of your subgoals in the context of your supergoals. I also feel more motivated to work on a project/goal when I can quickly trace the justification for working to my supergoals.
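A toy sketch of that bookkeeping (the goal names and the representation are purely illustrative assumptions, not anything from the post):

    # Each cached goal records which supergoal it serves, so "why am I doing this?"
    # becomes a quick lookup instead of a memory search.
    goals = {
        "reduce existential risk": None,          # supergoal: serves nothing above it
        "move to SF": "reduce existential risk",  # subgoal: cached with its justification
    }

    def justification(goal: str) -> str:
        chain = [goal]
        while goals[goal] is not None:
            goal = goals[goal]
            chain.append(goal)
        return " -> ".join(chain)

    print(justification("move to SF"))  # move to SF -> reduce existential risk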

(Great post btw!)

comment by Wilka · 2011-01-15T14:35:53.835Z · LW(p) · GW(p)

Over-correct your opinion by reading propaganda

You could also try creating your own propaganda (also useful for Akrasia). You should have a good idea of the types of things that motivate you, so you can use that knowledge to make very focused adverts (e.g. basic posters) for yourself.

There's more on this kind of thing, advertising to yourself, over at http://www.takebackyourbrain.com/ - but it looks like it hasn't been updated in a while.

comment by atucker · 2011-01-15T14:34:54.143Z · LW(p) · GW(p)

4) Admit that you're wrong to other people, whether it's publicly or to close friends who are in a position to catch you not having updated your behavior. This adds social pressure to continue the change, and more people to notice when you mess up. (This could go under one or two, though.)

comment by [deleted] · 2011-01-14T20:28:02.292Z · LW(p) · GW(p)

the Reformed Church of Dragon

Hilarious. I nearly choked on my sandwich.

comment by DSimon · 2011-01-14T15:15:00.682Z · LW(p) · GW(p)

I don't have much of substance to add, but I want to say: this is an excellent post, and I think it deserves front page status.

Replies from: AnnaSalamon
comment by AnnaSalamon · 2011-01-14T15:35:35.177Z · LW(p) · GW(p)

Which details did you find excellent or helpful? What might you do differently, now that you've read the post?

In terms of adding substance, it generally helps my own reading to know pieces of content others are stealing, since often those help me, too. (Though there's nothing wrong with just saying what you said.)

Replies from: DSimon
comment by DSimon · 2011-01-14T21:40:01.570Z · LW(p) · GW(p)

I enjoyed the specific examples; I was a little wary after the Watch Out Politics Ahead disclaimer, but the actual examples chosen were presented in a way that made their applicability to the topic obvious, and reduced their potential impact as mind-killers. (However, take this evaluation with a grain of salt: most of the examples reflected my current political positions, so they might not seem as ideal to another.)

I also liked that the proposed solutions took human biases into account, with suggestions that go beyond just identifying a common error. The first solution puts forward a specific suggestion for working around the given bias. The others propose doing some bias jujitsu, and putting the akrasic parts of our minds to work for us. This can be easy to overdo, but despite that I think it's a very useful technique, especially for newcomers who may not be as used to trying to pick apart their own thought processes.

On that note, I really want to see this article on the front page because I think the topic overall would be of particular interest to newcomers. It requires no prerequisites (though it also links liberally to related sequence and non-sequence posts) or unusual terminology, and provides concrete near-mode problems and solutions.