Self-Congratulatory Rationalism

post by ChrisHallquist · 2014-03-01T08:52:13.172Z · LW · GW · Legacy · 395 comments

Contents

  What Disagreement Signifies
  Intelligence and Rationality
  The Principle of Charity
  Beware Weirdness for Weirdness' Sake
  A More Humble Rationalism?

Quite a few people complain about the atheist/skeptic/rationalist communities being self-congratulatory. I used to dismiss this as a sign of people's unwillingness to admit that rejecting religion, or astrology, or whatever, was any more rational than accepting those things. Lately, though, I've started to worry.

Frankly, there seem to be a lot of people in the LessWrong community who imagine themselves to be, not just more rational than average, but paragons of rationality whom other people should accept as such. I've encountered people talking as if it's ridiculous to suggest they might sometimes respond badly to being told the truth about certain subjects. I've encountered people asserting the rational superiority of themselves and others in the community for flimsy reasons, or no reason at all.

Yet the readiness of members of the LessWrong community to disagree with and criticize each other suggests we don't actually think all that highly of each other's rationality. The fact that members of the LessWrong community tend to be smart is no guarantee that they will be rational. And we have much reason to fear "rationality" degenerating into signaling games.

What Disagreement Signifies

Let's start by talking about disagreement. There's been a lot of discussion of disagreement on LessWrong, and in particular of Aumann's agreement theorem, often glossed as something like "two rationalists can't agree to disagree." (Or perhaps that we can't foresee to disagree.) Discussion of disagreement, however, tends to focus on what to do about it. I'd rather take a step back, and look at what disagreement tells us about ourselves: namely, that we don't think all that highly of each other's rationality.

This, for me, is the take-away from Tyler Cowen and Robin Hanson's paper Are Disagreements Honest? In the paper, Cowen and Hanson define honest disagreement to mean that "the disputants respect each other’s relevant abilities, and consider each person’s stated opinion to be his best estimate of the truth, given his information and effort," and they argue that disagreements aren't honest in this sense.

I don't find this conclusion surprising. In fact, I suspect that while people sometimes do mean it when they talk about respectful disagreement, often they realize this is a polite fiction (which isn't necessarily a bad thing). Deep down, they know that disagreement is disrespect, at least in the sense of not thinking that highly of the other person's rationality. That people know this is shown by the fact that they don't like being told they're wrong—the reason why Dale Carnegie says you can't win an argument.

On LessWrong, people are quick to criticize each other's views, so much so that I've heard people cite this as a reason to be reluctant to post/comment (again showing they know intuitively that disagreement is disrespect). Furthermore, when people on LessWrong criticize others' views, they very often don't seem to expect to quickly reach agreement. Even people Yvain would classify as "experienced rationalists" sometimes knowingly have persistent disagreements. This suggests that LessWrongers almost never consider each other to be perfect rationalists.

And I actually think this is a sensible stance. For one thing, even if you met a perfect rationalist, it could be hard to figure out that they are one. Furthermore, the problem of knowing what to do about disagreement is made harder when you're faced with other people having persistent disagreements: if you find yourself agreeing with Alice, you'll have to think Bob is being irrational, and vice versa. If you rate them equally rational and adopt an intermediate view, you'll have to think they're both being a bit irrational for not doing likewise.

The situation is similar to Moore's paradox in philosophy—the absurdity of asserting "it's raining, but I don't believe it's raining." Or, as you might say, "Of course I think my opinions are right and other people's are wrong. Otherwise I'd change my mind." Similarly, when we think about disagreement, it seems like we're forced to say, "Of course I think my opinions are rational and other people's are irrational. Otherwise I'd change my mind."

We can find some room for humility in an analog of the preface paradox, the fact that the author of a book can say things like "any errors that remain are mine." We can say this because we might think each individual claim in the book is highly probable, while recognizing that all the little uncertainties add up to it being likely there are still errors. Similarly, we can think each of our beliefs is individually rational, while recognizing we still probably have some irrational beliefs—we just don't know which ones. And just because respectful disagreement is a polite fiction doesn't mean we should abandon it.
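To put rough numbers on the preface-paradox point (illustrative figures only, not taken from anywhere): suppose a book makes 100 independent claims and the author assigns each one 99% probability. An error somewhere in the book is still more likely than not:

```latex
% Illustrative preface-paradox arithmetic; the numbers are assumptions chosen
% for the example, not measurements of any actual book.
\[
P(\text{no errors}) = 0.99^{100} \approx 0.37,
\qquad
P(\text{at least one error}) = 1 - 0.99^{100} \approx 0.63.
\]
```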

I don't have a clear sense of how controversial the above will be. Maybe we all already recognize that we don't respect each other's opinions 'round these parts. But I think some features of discussion at LessWrong look odd in light of the above points about disagreement—including some of the things people say about disagreement.

The wiki, for example, says that "Outside of well-functioning prediction markets, Aumann agreement can probably only be approximated by careful deliberative discourse. Thus, fostering effective deliberation should be seen as a key goal of Less Wrong." The point of Aumann's agreement theorem, though, is precisely that ideal rationalists shouldn't need to engage in deliberative discourse, as usually conceived, in order to reach agreement.
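Roughly, the formal result behind all of this says that if two agents share a common prior and their posterior probabilities for some event are common knowledge between them, those posteriors must be equal. The notation below is a compressed paraphrase of Aumann's 1976 theorem, included here only for reference:

```latex
% Aumann (1976), paraphrased: common prior + common knowledge of posteriors
% implies equal posteriors.
\[
\text{If } q_1 = P(A \mid \mathcal{I}_1) \text{ and } q_2 = P(A \mid \mathcal{I}_2)
\text{ are common knowledge, then } q_1 = q_2 .
\]
```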

As Cowen and Hanson put it, "Merely knowing someone else’s opinion provides a powerful summary of everything that person knows, powerful enough to eliminate any differences of opinion due to differing information." So sharing evidence the normal way shouldn't be necessary. Asking someone "what's the evidence for that?" implicitly says, "I don't trust your rationality enough to take your word for it." But when dealing with real people who may or may not have a rational basis for their beliefs, that's almost always the right stance to take.

Intelligence and Rationality

Intelligence does not equal rationality. Need I say more? Not long ago, I wouldn't have thought so. I would have thought it was a fundamental premise behind LessWrong, indeed behind old-school scientific skepticism. As Michael Shermer once said, "Smart people believe weird things because they are skilled at defending beliefs they arrived at for non-smart reasons."

Yet I've heard people suggest that you must never be dismissive of things said by smart people, or that the purportedly high IQ of the LessWrong community means people here don't make bad arguments. When I hear that, I think "whaaat? People on LessWrong make bad arguments all the time!" When this happens, I generally limit myself to trying to point out the flaw in the argument and/or downvoting, and resist the urge to shout "YOUR ARGUMENTS ARE BAD AND YOU SHOULD FEEL BAD." I just think it.

When I reach for an explanation of why terrible arguments from smart people shouldn't surprise anyone, I go to Yvain's Intellectual Hipsters and Meta-Contrarianism, one of my favorite LessWrong posts of all time. Yvain notes that meta-contrarianism often isn't a good thing, but on re-reading the post I noticed what seems like an important oversight:

A person who is somewhat upper-class will conspicuously signal eir wealth by buying difficult-to-obtain goods. A person who is very upper-class will conspicuously signal that ey feels no need to conspicuously signal eir wealth, by deliberately not buying difficult-to-obtain goods.

A person who is somewhat intelligent will conspicuously signal eir intelligence by holding difficult-to-understand opinions. A person who is very intelligent will conspicuously signal that ey feels no need to conspicuously signal eir intelligence, by deliberately not holding difficult-to-understand opinions.

According to the survey, the average IQ on this site is around 145. People on this site differ from the mainstream in that they are more willing to say death is bad, more willing to say that science, capitalism, and the like are good, and less willing to say that there's some deep philosophical sense in which 1+1 = 3. That suggests people around that level of intelligence have reached the point where they no longer feel it necessary to differentiate themselves from the sort of people who aren't smart enough to understand that there might be side benefits to death.

The pattern of countersignaling Yvain describes here is real. But it's important not to forget that sometimes, the super-wealthy signal their wealth by buying things even the moderately wealthy can't afford. And sometimes, the very intelligent signal their intelligence by holding opinions even the moderately intelligent have trouble understanding. You also get hybrid status moves: designer versions of normally low-class clothes, complicated justifications for opinions normally found among the uneducated.

Robin Hanson has argued that this leads to biases in academia:

I’ve argued that the main social function of academia is to let students, patrons, readers, etc. affiliate with credentialed-as-impressive minds. If so, academic beliefs are secondary – the important thing is to clearly show respect to those who make impressive displays like theorems or difficult data analysis. And the obvious way for academics to use their beliefs to show respect for impressive folks is to have academic beliefs track the most impressive recent academic work.

Robin's post focuses on economics, but I suspect the problem is even worse in my home field of philosophy. As I've written before, the problem is that philosophers never agree on whether one of their peers has solved a problem. Therefore, there can be no rewards for being right, only rewards for showing off your impressive intellect. This often means finding clever ways to be wrong.

I need to emphasize that I really do think philosophers are showing off real intelligence, not merely showing off faux-cleverness. GRE scores suggest philosophers are among the smartest academics, and their performance is arguably made more impressive by the fact that GRE quant scores are bimodally distributed based on whether your major required you to spend four years practicing your high school math, with philosophy being one of the majors that doesn't grant that advantage. Based on this, if you think it's wrong to dismiss the views of high-IQ people, you shouldn't be dismissive of mainstream philosophy. But in fact I think LessWrong's oft-noticed dismissiveness of mainstream philosophy is largely justified.

I've found philosophy of religion in particular to be a goldmine of terrible arguments made by smart people. Consider Alvin Plantinga's modal ontological argument. The argument is sufficiently difficult to understand that I won't try to explain it here. If you want to understand it, I'm not sure what to tell you except to maybe read Plantinga's book The Nature of Necessity. In fact, I predict at least one LessWronger will comment on this thread with an incorrect explanation or criticism of the argument. Which is not to say they wouldn't be smart enough to understand it, just that it might take them a few iterations of getting it wrong to finally get it right. And coming up with an argument like that is no mean feat—I'd guess Plantinga's IQ is just as high as the average LessWronger's.

Once you understand the modal ontological argument, though, it quickly becomes obvious that Plantinga's logic works just as well to "prove" that it's a necessary truth that pigs fly. Or that Plantinga's god does not exist. Or even as a general purpose "proof" of any purported mathematical truth you please. The main point is that Plantinga's argument is not stupid in the sense of being something you'd only come up with if you had a low IQ—the opposite is true. But Plantinga's argument is stupid in the sense of being something you'd only come up with while under the influence of some serious motivated reasoning.
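For readers who just want the shape of the thing, the usual S5 reconstruction, heavily compressed (a gloss, not Plantinga's own formulation), runs roughly as follows:

```latex
% Compressed gloss of a standard S5 reconstruction; G abbreviates the claim
% that Plantinga's God exists. Nothing in the schema restricts what may be
% substituted for G, which is what the parody objection exploits.
\[
\Diamond \Box G \ \ \text{(premise)}, \qquad
\Diamond \Box p \rightarrow \Box p \ \ \text{(valid in S5)}, \qquad
\therefore\ \Box G .
\]
```

Substitute "pigs fly" for G and the same schema delivers the porcine conclusion, which is the point.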

The modal ontological argument is admittedly an extreme case. Rarely is the chasm between the difficulty of the concepts underlying an argument, and the argument's actual merits, so vast. Still, beware the temptation to affiliate with smart people by taking everything they say seriously.

Edited to add: in the original post, I intended but forgot to emphasize that I think the correlation between IQ and rationality is weak at best. Do people disagree? Does anyone want to go out on a limb and say, "They aren't the same thing, but the correlation is still very strong?"

The Principle of Charity

I've made no secret of the fact that I'm not a big fan of the principle of charity—often defined as the rule that you should interpret other people's arguments on the assumption that they are not saying anything stupid. The problem with this is that other people are often saying something stupid. Because of that, I think charitable reading is overrated compared to fair and accurate reading. When someone says something stupid, you don't have to pretend otherwise, but it's really important not to attribute to people stupid things they never said.

More frustrating than this simple disagreement over charity, though, is when people who invoke the principle of charity do so selectively. They apply it to people whose views they're at least somewhat sympathetic to, but when they find someone they want to attack, they have trouble meeting basic standards of fairness. And in the most frustrating cases, this gets explicit justification: "we need to read these people charitably, because they are obviously very intelligent and rational." I once had a member of the LessWrong community actually tell me, "You need to interpret me more charitably, because you know I'm sane." "Actually, buddy, I don't know that," I wanted to reply—but didn't, because that would've been rude.

I can see benefits to the principle of charity. It helps avoid flame wars, and from a Machiavellian point of view it's nice to close off the "what I actually meant was..." responses. Whatever its merits, though, they can't depend on the actual intelligence and rationality of the person making an argument. Not only is intelligence no guarantee against making bad arguments; the whole reason we demand that other people tell us the reasons for their opinions in the first place is that we fear their reasons might be bad ones.

As I've already explained, there's a difficult problem here about how to be appropriately modest about our own rationality. When I say something, I never think it's stupid, otherwise I wouldn't say it. But at least I'm not so arrogant as to go around demanding other people acknowledge my highly advanced rationality. I don't demand that they accept "Chris isn't saying anything stupid" as an axiom in order to engage with me.

Beware Weirdness for Weirdness' Sake

There's a theory in the psychology and sociology of religion that the purpose of seemingly foolish rituals like circumcision and snake-handling is to provide a costly and therefore hard-to-fake signal of group commitment. I think I've heard it suggested—though I can't find by whom—that crazy religious doctrines could serve a similar purpose. It's easy to say you believe in a god, but being willing to risk ridicule by saying you believe in one god who is three persons, who are all the same god, yet not identical to each other, and you can't explain how that is but it's a mystery you accept on faith... now that takes dedication.

Once you notice the general "signal group commitment in costly ways" strategy, it seems to crop up everywhere. Subcultures often seem to go out of their way to be weird, to do things that will shock people outside the subculture, ranging from tattoos and weird clothing to coming up with reasons why things regarded as normal and innocuous in the broader culture are actually evil. Even something as simple as a large body of jargon and in-jokes can do the trick: if someone takes the time to learn all the jargon and in-jokes, you know they're committed.

This tendency is probably harmless when done with humor and self-awareness, but it's more worrisome when a group becomes convinced its little bits of weirdness for weirdness' sake are a sign of its superiority to other groups. And it's worth being aware of, because it makes sense of signaling moves that aren't straightforwardly plays for higher status.

The LessWrong community has amassed a truly impressive store of jargon and in-jokes over the years, and some of it's quite useful (I reiterate my love for the term "meta-contrarian"). But as with all jargon, LessWrongian jargon is often just a silly way of saying things you could have said without it. For example, people say "I have a poor mental model of..." when they could have just said they don't understand it very well.

That bit of LessWrong jargon is merely silly. Worse, I think, is the jargon around politics. Recently, a friend gave "they avoid blue-green politics" as a reason LessWrongians are more rational than other people. It took a day before it clicked that "blue-green politics" here basically just meant "partisanship." But complaining about partisanship is old hat—literally. America's founders were fretting about it back in the 18th century. Nowadays, such worries are something you expect to hear from boringly middle-brow columnists at major newspapers, not edgy contrarians.

But "blue-green politics," "politics is the mind-killer"... never mind how much content they add, the point is they're obscure enough to work as an excuse to feel superior to anyone whose political views are too mainstream. Outsiders will probably think you're weird, invoking obscure jargon to quickly dismiss ideas that seem plausible to them, but on the upside you'll get to bond with members of your in-group over your feelings of superiority.

A More Humble Rationalism?

I feel like I should wrap up with some advice. Unfortunately, this post was motivated by problems I'd seen, not my having thought of brilliant solutions to them. So I'll limit myself to some fairly boring, non-brilliant advice.

First, yes, some claims are more rational than others. Some people even do better at rationality overall than others. But the idea of a real person being anything close to an ideal rationalist is an extraordinary claim, and should be met with appropriate skepticism and demands for evidence. Don't forget that.

Also, beware signaling games. A good dose of Hansonian cynicism, applied to your own in-group, is healthy. Somewhat relatedly, I've begun to wonder if "rationalism" is really good branding for a movement. Rationality is systematized winning, sure, but the "rationality" branding isn't as good for keeping that front and center, especially compared to, say, the effective altruism meme. It's just a little too easy to forget where "rationality" is supposed to connect with the real world, increasing the temptation for "rationality" to spiral off into signaling games.

395 comments


comment by Wei Dai (Wei_Dai) · 2014-03-01T09:21:52.666Z · LW(p) · GW(p)

So sharing evidence the normal way shouldn't be necessary. Asking someone "what's the evidence for that?" implicitly says, "I don't trust your rationality enough to take your word for it."

I disagree with this, and explained why in Probability Space & Aumann Agreement. To quote the relevant parts:

There are some papers that describe ways to achieve agreement in other ways, such as iterative exchange of posterior probabilities. But in such methods, the agents aren't just moving closer to each other's beliefs. Rather, they go through convoluted chains of deduction to infer what information the other agent must have observed, given his declarations, and then update on that new information. (The process is similar to the one needed to solve the second riddle on this page.) The two agents essentially still have to communicate I(w) and J(w) to each other, except they do so by exchanging posterior probabilities and making logical inferences from them.

Is this realistic for human rationalist wannabes? It seems wildly implausible to me that two humans can communicate all of the information they have that is relevant to the truth of some statement just by repeatedly exchanging degrees of belief about it, except in very simple situations. You need to know the other agent's information partition exactly in order to narrow down which element of the information partition he is in from his probability declaration, and he needs to know that you know so that he can deduce what inference you're making, in order to continue to the next step, and so on. One error in this process and the whole thing falls apart. It seems much easier to just tell each other what information the two of you have directly.

In other words, when I say "what's the evidence for that?", it's not that I don't trust your rationality (although of course I don't trust your rationality either), but I just can't deduce what evidence you must have observed from your probability declaration alone even if you were fully rational.
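To make that concrete, here is a toy simulation of the kind of iterated-announcement process studied by Geanakoplos and Polemarchakis. The world set, the event, and the two information partitions are made-up illustrations, and note that both inference steps read the other agent's partition directly (exactly the "you need to know the other agent's information partition" requirement described above):

```python
# Toy Aumann-style dialogue: two agents with a common uniform prior repeatedly
# announce posteriors for EVENT and deduce what the other must have observed.
# All the particulars (worlds, event, partitions) are illustrative assumptions.
from fractions import Fraction

STATES = range(12)                      # finite set of possible worlds
EVENT = {0, 1, 2, 3, 4, 5}              # the proposition both agents care about

# Each agent's information partition: which worlds they cannot tell apart.
PARTITION_A = [{0, 1, 2, 3}, {4, 5, 6, 7}, {8, 9, 10, 11}]
PARTITION_B = [{0, 4, 8}, {1, 5, 9}, {2, 6, 10}, {3, 7, 11}]

def cell(partition, w):
    """Return the partition cell containing world w."""
    return next(c for c in partition if w in c)

def posterior(possible):
    """P(EVENT | possible worlds) under a uniform common prior."""
    return Fraction(len(EVENT & possible), len(possible))

def dialogue(true_world, rounds=10):
    # poss_x[w] = worlds agent x would still consider possible if w were true.
    poss_a = {w: cell(PARTITION_A, w) for w in STATES}
    poss_b = {w: cell(PARTITION_B, w) for w in STATES}
    for _ in range(rounds):
        # A announces a posterior; B keeps only worlds consistent with it.
        announce_a = {w: posterior(poss_a[w]) for w in STATES}
        poss_b = {w: {v for v in poss_b[w] if announce_a[v] == announce_a[w]}
                  for w in STATES}
        # B announces back; A makes the symmetric deduction.
        announce_b = {w: posterior(poss_b[w]) for w in STATES}
        poss_a = {w: {v for v in poss_a[w] if announce_b[v] == announce_b[w]}
                  for w in STATES}
        print(announce_a[true_world], announce_b[true_world])
        if announce_a[true_world] == announce_b[true_world]:
            break

dialogue(true_world=5)   # the two posteriors converge after a couple of rounds
```

One error anywhere in those deduction steps, or any uncertainty about the other agent's partition, and the whole chain of inference falls apart, which is the point of the quoted passage.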

Replies from: JWP, paulfchristiano, ChrisHallquist, RobinZ, PeterDonis, Gunnar_Zarncke
comment by JWP · 2014-03-01T16:20:23.203Z · LW(p) · GW(p)

when I say "what's the evidence for that?", it's not that I don't trust your rationality (although of course I don't trust your rationality either), but I just can't deduce what evidence you must have observed from your probability declaration alone even if you were fully rational.

Yes. There are reasons to ask for evidence that have nothing to do with disrespect.

  • Even assuming that all parties are perfectly rational and that any disagreement must stem from differing information, it is not always obvious which party has better relevant information. Sharing evidence can clarify whether you know something that I don't, or vice versa.

  • Information is a good thing; it refines one's model of the world. Even if you are correct and I am wrong, asking for evidence has the potential to add your information to my model of the world. This is preferable to just taking your word for the conclusion, because that information may well be relevant to more decisions than the topic at hand.

comment by paulfchristiano · 2014-03-11T01:15:03.561Z · LW(p) · GW(p)

There is truth to this sentiment, but you should keep in mind results like this one by Scott Aaronson, that the amount of info that people actually have to transmit is independent of the amount of evidence that they have (even given computational limitations).

It seems like doubting each other's rationality is a perfectly fine explanation. I don't think most people around here are perfectly rational, nor that they think I'm perfectly rational, and definitely not that they all think that I think they are perfectly rational. So I doubt that they've updated enough on the fact that my views haven't converged towards theirs, and they may be right that I haven’t updated enough on the fact that their views haven’t converged towards mine.

In practice we live in a world where many pairs of people disagree, and you have to disagree with a lot of people. I don’t think the failure to have common knowledge is much of a vice, either of me or my interlocutor. It’s just a really hard condition.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2014-03-11T08:17:36.180Z · LW(p) · GW(p)

There is truth to this sentiment, but you should keep in mind results like this one by Scott Aaronson, that the amount of info that people actually have to transmit is independent of the amount of evidence that they have (even given computational limitations).

The point I wanted to make was that AFAIK there is currently no practical method for two humans to reliably reach agreement on some topic besides exchanging all the evidence they have, even if they trust each other to be as rational as humanly possible. The result by Scott Aaronson may be of theoretical interest (and maybe even of practical use by future AIs that can perform exact computations with the information in their minds), but it seems to have no relevance to humans faced with real-world disagreements (as opposed to toy examples).

I don’t think the failure to have common knowledge is much of a vice, either of me or my interlocutor. It’s just a really hard condition.

I don't understand this. Can you expand?

Replies from: Lumifer
comment by Lumifer · 2014-03-11T15:38:26.261Z · LW(p) · GW(p)

there is currently no practical method for two humans to reliably reach agreement on some topic besides exchanging all the evidence they have

Huh? There is currently no practical method for two humans to reliably reach agreement on some topic, full stop. Exchanging all evidence might help, but given that we are talking about humans and not straw Vulcans, it is still not a reliable method.

comment by ChrisHallquist · 2014-03-02T22:55:08.766Z · LW(p) · GW(p)

There are some papers that describe ways to achieve agreement in other ways, such as iterative exchange of posterior probabilities. But in such methods, the agents aren't just moving closer to each other's beliefs. Rather, they go through convoluted chains of deduction to infer what information the other agent must have observed, given his declarations, and then update on that new information. (The process is similar to the one needed to solve the second riddle on this page.) The two agents essentially still have to communicate I(w) and J(w) to each other, except they do so by exchanging posterior probabilities and making logical inferences from them.

Is this realistic for human rationalist wannabes? It seems wildly implausible to me that two humans can communicate all of the information they have that is relevant to the truth of some statement just by repeatedly exchanging degrees of belief about it, except in very simple situations. You need to know the other agent's information partition exactly in order to narrow down which element of the information partition he is in from his probability declaration, and he needs to know that you know so that he can deduce what inference you're making, in order to continue to the next step, and so on. One error in this process and the whole thing falls apart. It seems much easier to just tell each other what information the two of you have directly.

I won't try to comment on the formal argument (my understanding of that literature comes mostly from what Robin Hanson has said about it), but intuitively, this seems wrong. It seems like two people trading probability estimates shouldn't need to deduce exactly what the other has observed, they just need to make inferences along the lines of, "wow, she wasn't swayed as much as I expected by me telling her my opinion, she must think she has some pretty good evidence." At least that's the inference you would make if you both knew you trust each other's rationality. More realistically, of course, the correct inference is usually "she wasn't swayed by me telling her my opinion, she doesn't just trust me to be rational."

Consider what would have to happen for two rationalists who knowingly trust each other's rationality to have a persistent disagreement. Because of conservation of expected evidence, Alice has to think her probability estimate would on average remain the same after hearing Bob's evidence, and Bob must think the same about hearing Alice's evidence. That seems to suggest they both must think they have better, more relevant evidence about the question at hand. And it might be perfectly reasonable for them to think that at first.
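Spelled out, the conservation-of-expected-evidence identity being invoked here is just the law of total probability applied to a hypothesis H and not-yet-seen evidence E:

```latex
% Before seeing evidence E, the expected posterior in H must equal the prior.
\[
\mathbb{E}\big[ P(H \mid E) \big] = \sum_{e} P(E = e)\, P(H \mid E = e) = P(H).
\]
```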

But after several rounds of sharing their probability estimates and seeing the other not budge, Alice will have to realize Bob thinks he's better informed about the topic than she is. And Bob will have to realize the same about Alice. And if they both trust each other's rationality, Alice will have to think, "I thought I was better informed than Bob about this, but it looks like Bob thinks he's the one who's better informed, so maybe I'm wrong about being better informed." And Bob will have to have the parallel thought. Eventually, they should converge.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2014-03-02T23:36:00.480Z · LW(p) · GW(p)

I won't try to comment on the formal argument (my understanding of that literature comes mostly from what Robin Hanson has said about it), but intuitively, this seems wrong.

Wei Dai's description is correct, see here for an example where the final estimate is outside the range of the initial two. And yes, the Aumann agreement theorem does not say what nearly everyone (including Eliezer) seems to intuitively think it says.

Replies from: Will_Newsome
comment by Will_Newsome · 2014-04-04T04:47:07.451Z · LW(p) · GW(p)

And yes, the Aumann agreement theorem does not say what nearly everyone (including Eliezer) seems to intuitively think it says.

Wonder if a list of such things can be constructed. Algorithmic information theory is an example where Eliezer drew the wrong implications from the math and unfortunately much of LessWrong inherited that. Group selection (multi-level selection) might be another example, but less clear cut, as that requires computational modeling and not just interpretation of mathematics. I'm sure there are more and better examples.

comment by RobinZ · 2014-04-23T15:39:05.579Z · LW(p) · GW(p)

In other words, when I say "what's the evidence for that?", it's not that I don't trust your rationality (although of course I don't trust your rationality either), but I just can't deduce what evidence you must have observed from your probability declaration alone even if you were fully rational.

The argument can even be made more general than that: under many circumstances, it is cheaper for us to discuss the evidence we have than it is for us to try to deduce it from our respective probability estimates.

comment by PeterDonis · 2014-03-02T02:48:04.058Z · LW(p) · GW(p)

(although of course I don't trust your rationality either)

I'm not sure this qualifier is necessary. Your argument is sufficient to establish your point (which I agree with) even if you do trust the other's rationality.

Replies from: ChrisHallquist
comment by ChrisHallquist · 2014-03-02T22:35:33.241Z · LW(p) · GW(p)

Personally, I am entirely in favor of the "I don't trust your rationality either" qualifier.

Replies from: PeterDonis
comment by PeterDonis · 2014-03-03T16:46:13.866Z · LW(p) · GW(p)

Is that because you think it's necessary to Wei_Dai's argument, or just because you would like people to be up front about what they think?

comment by Gunnar_Zarncke · 2014-03-01T21:54:14.330Z · LW(p) · GW(p)

Yes. But it entirely depends on how the request for supportive references is phrased.

Good:

Interesting point. I'm not entirely clear how you arrived at that position. I'd like to look up some detail questions on that. Could you provide references I might look at?

Bad:

That argument makes no sense. What references do you have to support such a ridiculous claim?

The neutral

What's the evidence for that?

leaves the interpretation of the attitude to the reader/addressee and is bound to be misinterpreted (people misinterpreting tone or meaning of email).

Replies from: ChrisHallquist
comment by ChrisHallquist · 2014-03-02T23:02:42.303Z · LW(p) · GW(p)

Saying

Interesting point. I'm not entirely clear how you arrived at that position. I'd like to look up some detail questions on that. Could you provide references I might look at?

sort of implies you're updating towards the other's position. If you not only disagree but are totally unswayed by hearing the other person's opinion, it becomes polite but empty verbiage (not that polite but empty verbiage is always a bad thing).

Replies from: Gunnar_Zarncke
comment by Gunnar_Zarncke · 2014-03-02T23:09:07.305Z · LW(p) · GW(p)

But shouldn't you always update toward the other's position? And if the argument isn't convincing, you can truthfully say that you updated only slightly.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2014-03-02T23:30:33.464Z · LW(p) · GW(p)

But shouldn't you always update toward the other's position?

That's not how Aumann's theorem works. For example, if Alice mildly believes X and Bob strongly believes X, it may be that Alice has weak evidence for X, and Bob has much stronger independent evidence for X. Thus, after exchanging evidence they'll both believe X even more strongly than Bob did initially.
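A worked example with made-up numbers: suppose the prior odds on X are 1:1, Alice's private evidence has likelihood ratio 3, and Bob's independent evidence has likelihood ratio 9. Then:

```latex
% Illustrative numbers only; independent likelihood ratios multiply.
\[
P_{\mathrm{Alice}}(X) = \tfrac{3}{4} = 0.75, \qquad
P_{\mathrm{Bob}}(X) = \tfrac{9}{10} = 0.90, \qquad
P(X \mid \text{both}) = \tfrac{27}{28} \approx 0.96 > 0.90 .
\]
```

After pooling their evidence, both end up more confident of X than Bob was on his own.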

Replies from: palladias, Viliam_Bur, Will_Newsome, Gunnar_Zarncke
comment by palladias · 2014-03-04T03:14:00.906Z · LW(p) · GW(p)

Yup!

One related use case is when everyone in a meeting prefers policy X to policy Y, although each are a little concerned about one possible problem. Going around the room and asking everyone how likely they think X is to succeed produces estimates of 80%, so, having achieved consensus, they adopt X.

But, if people had mentioned their particular reservations, they would have noticed they were all different, and that, once they'd been acknowledged, Y was preferred.

comment by Viliam_Bur · 2014-03-03T13:21:16.738Z · LW(p) · GW(p)

Even if they both equally strongly believe X, it makes sense for them to talk whether they both used the same evidence or different evidence.

comment by Gunnar_Zarncke · 2014-03-03T07:36:32.395Z · LW(p) · GW(p)

Of course.

I agree that

"Interesting point. I'm not entirely clear how you arrived at that position. I'd like to look up some detail questions on that. Could you provide references I might look at?"

doesn't make clear that the other holds another position and that the reply may just address the validity of the evidence.

But even then shouldn't you see it at least as weak evidence and thus believe X at least a bit more strongly?

comment by Scott Alexander (Yvain) · 2014-03-02T07:50:21.302Z · LW(p) · GW(p)

I interpret you as making the following criticisms:

1. People disagree with each other, rather than use Aumann agreement, which proves we don't really believe we're rational

Aside from Wei's comment, I think we also need to keep track of what we're doing.

If we were to choose a specific empirical fact or prediction - like "Russia will invade Ukraine tomorrow" - and everyone on Less Wrong were to go on Prediction Book and make their prediction and we took the average - then I would happily trust that number more than I would trust my own judgment. This is true across a wide variety of different facts.

But this doesn't preclude discussion. Aumann agreement is a way of forcing results if forcing results were our only goal, but we can learn more by trying to disentangle our reasoning processes. Some advantages to talking about things rather than immediately jumping to Aumann:

  • We can both increase our understanding of the issue.

  • We may find a subtler position we can both agree on. If I say "California is hot" and you say "California is cold", instead of immediately jumping to "50% probability either way" we can work out which parts of California are hot versus cold at which parts of the year.

  • We may trace part of our disagreement back to differing moral values. If I say "capital punishment is good" and you say "capital punishment is bad", then it may be right for me to adjust a little in your favor since you may have evidence that many death row inmates are innocent, but I may also find that most of the force of your argument is just that you think killing people is never okay. Depending on how you feel about moral facts and moral uncertainty, we might not want to Aumann adjust this one. Nearly everything in politics depends on moral differences at least a little.

  • We may trace our disagreement back to complicated issues of worldview and categorization. I am starting to interpret most liberal-conservative issues as a tendency to draw Schelling fences in different places and then correctly reason with the categories you've got. I'm not sure if you can Aumann-adjust that away, but you definitely can't do it without first realizing it's there, which takes some discussion.

So although I would endorse Aumann-adjusting as a final verdict with many of the people on this site, I think it's great that we have discussions - even heated discussions - first, and I think a lot of those discussions might look from the outside like disrespect and refusal to Aumann adjust.

2. It is possible that high IQ people can be very wrong and even in a sense "stupidly" wrong, and we don't acknowledge this enough.

I totally agree this is possible.

The role that IQ is playing here is that of a quasi-objective Outside View measure of a person's ability to be correct and rational. It is, of course, a very very lossy measure that often goes horribly wrong. On the other hand, it makes a useful counterbalance to our subjective measure of "I feel I'm definitely right; this other person has nothing to teach me."

So we have two opposite failure modes to avoid here. The first failure mode is the one where we fetishize the specific IQ number even when our own rationality tells us something is wrong - like Plantinga being apparently a very smart individual, but his arguments being terribly flawed. The second failure mode is the one where we're too confident in our own instincts, even when the numbers tell us the people on the other side are smarter than we are. For example, a creationist says "I'm sure that creationism is true, and it doesn't matter whether really fancy scientists who use big words tell me it isn't."

We end up in a kind of bravery debate situation here, where we have to decide whether it's worth warning people more against the first failure mode (at the risk it will increase the second), or against the second failure mode more (at the risk that it will increase the first).

And, well, studies pretty universally find everyone is overconfident of their own opinions. Even the Less Wrong survey finds people here to be really overconfident.

So I think it's more important to warn people to be less confident they are right about things. The inevitable response is "What about creationism?!" to which the counterresponse is "Okay, but creationists are stupid, be less confident when you disagree with people as smart or smarter than you."

This gets misinterpreted as IQ fetishism, but I think it's more of a desperate search for something, anything to fetishize other than our own subjective feelings of certainty.

3. People are too willing to be charitable to other people's arguments.

This is another case where I think we're making the right tradeoff.

Once again there are two possible failure modes. First, you could be too charitable, and waste a lot of time engaging with people who are really stupid, trying to figure out a smart meaning to what they're saying. Second, you could be not charitable enough by prematurely dismissing an opponent without attempting to understand her, and so perhaps missing out on a subtler argument that proves she was right and you were wrong all along.

Once again, everyone is overconfident. No one is underconfident. People tell me I am too charitable all the time, and yet I constantly find I am being not-charitable-enough, unfairly misinterpreting other people's points, and so missing or ignoring very strong arguments. Unless you are way way way more charitable than I am, I have a hard time believing that you are anywhere near the territory where the advice "be less charitable" is more helpful than the advice "be more charitable".

As I said above, you can try to pinpoint where to apply this advice. You don't need to be charitable to really stupid people with no knowledge of a field. But once you've determined someone is in a reference class where there's a high prior on them having good ideas - they're smart, well-educated, have a basic commitment to rationality - advising that someone be less charitable to these people seems a lot like advising people to eat more and exercise less - it might be useful in a couple of extreme cases, but I really doubt it's where the gain for the average person lies.

In fact, it's hard for me to square your observation that we still have strong disagreements with your claim that we're too charitable. At least one side is getting things wrong. Shouldn't they be trying to pay a lot more attention to the other side's arguments?

I feel like utter terror is underrated as an epistemic strategy. Unless you are some kind of freakish mutant, you are overconfident about nearly everything and have managed to build up very very strong memetic immunity to arguments that are trying to correct this. Charity is the proper response to this, and I don't think anybody does it enough.

4. People use too much jargon.

Yeah, probably.

There are probably many cases in which the jargony terms have subtly different meaning or serve as reminders of a more formal theory and so are useful ("metacontrarian" versus "showoff", for example), but probably a lot of cases where people could drop the jargon without cost.

I think this is a more general problem of people being bad at writing - "utilize" vs. "use" and all that.

5. People are too self-congratulatory and should be humbler

What's weird is that when I read this post, you keep saying people are too self-congratulatory, but to me it sounds more like you're arguing people are being too modest, and not self-congratulatory enough.

When people try to replace their own subjective analysis of who can easily be dismissed ("They don't agree with me; screw them") with something based more on IQ or credentials, they're being commendably modest ("As far as I can tell, this person is saying something dumb, but since I am often wrong, I should try to take the Outside View by looking at somewhat objective indicators of idea quality.")

And when people try to use the Principle of Charity, once again they are being commendably modest ("This person's arguments seem stupid to me, but maybe I am biased or a bad interpreter. Let me try again to make sure.")

I agree that it is an extraordinary claim to believe anyone is a perfect rationalist. That's why people need to keep these kinds of safeguards in place as saving throws against their inevitable failures.

Replies from: ChrisHallquist, None, elharo
comment by ChrisHallquist · 2014-03-03T00:42:28.863Z · LW(p) · GW(p)

So although I would endorse Aumann-adjusting as a final verdict with many of the people on this site, I think it's great that we have discussions - even heated discussions - first, and I think a lot of those discussions might look from the outside like disrespect and refusal to Aumann adjust.

I agree that what look like disrespectful discussions at first could eventually lead to Aumann agreement, but my impression is that there are a lot of persistent disagreements within the online rationalist community. Eliezer's disagreements with Robin Hanson are well-known. My impression is that even people within MIRI have persistent disagreements with each other, though not as big as the Eliezer-Robin disagreements. I don't know for sure Alicorn and I would continue to disagree about the ethics of white lies if we talked it out thoroughly, but it wouldn't remotely surprise me. Et cetera.

The role that IQ is playing here is that of a quasi-objective Outside View measure of a person's ability to be correct and rational. It is, of course, a very very lossy measure that often goes horribly wrong. On the other hand, it makes a useful counterbalance to our subjective measure of "I feel I'm definitely right; this other person has nothing to teach me."

So we have two opposite failure modes to avoid here. The first failure mode is the one where we fetishize the specific IQ number even when our own rationality tells us something is wrong - like Plantinga being apparently a very smart individual, but his arguments being terribly flawed. The second failure mode is the one where we're too confident in our own instincts, even when the numbers tell us the people on the other side are smarter than we are. For example, a creationist says "I'm sure that creationism is true, and it doesn't matter whether really fancy scientists who use big words tell me it isn't."

I guess I need to clarify that I think IQ is a terrible proxy for rationality, that the correlation is weak at best. And your suggested heuristic will do nothing to stop high IQ crackpots from ignoring the mainstream scientific consensus. Or even low IQ crackpots who can find high IQ crackpots to support them. This is actually a thing that happens with some creationists—people thinking "because I'm an , I can see those evolutionary biologists are talking nonsense." Creationists would do better to attend to the domain expertise of evolutionary biologists. (See also: my post on the statistician's fallacy.)

I'm also curious as to how much of your willingness to agree with me in dismissing Plantinga is based on him being just one person. Would you be more inclined to take a sizeable online community of Plantingas seriously?

Unless you are way way way more charitable than I am, I have a hard time believing that you are anywhere near the territory where the advice "be less charitable" is more helpful than the advice "be more charitable".

As I said above, you can try to pinpoint where to apply this advice. You don't need to be charitable to really stupid people with no knowledge of a field. But once you've determined someone is in a reference class where there's a high prior on them having good ideas - they're smart, well-educated, have a basic commitment to rationality - advising that someone be less charitable to these people seems a lot like advising people to eat more and exercise less - it might be useful in a couple of extreme cases, but I really doubt it's where the gain for the average person lies.

On the one hand, I dislike the rhetoric of charity as I see it happen on LessWrong. On the other hand, in practice, you're probably right that people aren't too charitable. In practice, the problem is selective charity—a specific kind of selective charity, slanted towards favoring people's in-group. And you seem to endorse this selective charity.

I've already said why I don't think high IQ is super-relevant to deciding who you should read charitably. Overall education also doesn't strike me as super-relevant either. In the US, better educated Republicans are more likely to deny global warming and think that Obama's a Muslim. That appears to be because (a) you can get a college degree without ever taking a class on climate science and (b) more educated conservatives are more likely to know what they're "supposed" to believe about certain issues. Of course, when someone has a Ph.D. in a relevant field, I'd agree that you should be more inclined to assume they're not saying anything stupid about that field (though even that presumption is weakened if they're saying something that would be controversial among their peers).

As for "basic commitment to rationality," I'm not sure what you mean by that. I don't know how I'd turn it into a useful criterion, aside from defining it to mean people I'd trust for other reasons (e.g. endorsing standard attitudes of mainstream academia). It's quite easy for even creationists to declare their commitment to rationality. On the other hand, if you think someone's membership in the online rationalist community is a strong reason to treat what they say charitably, yeah, I'm calling that self-congratulatory nonsense.

And that's the essence of my reply to your point #5. It's not people having self-congratulatory attitudes on an individual level. It's the self-congratulatory attitudes towards their in-group.

Replies from: Yvain, Solvent, blacktrance
comment by Scott Alexander (Yvain) · 2014-03-03T16:22:06.654Z · LW(p) · GW(p)

I agree that what look like disrespectful discussions at first could eventually lead to Aumann agreement, but my impression is that there are a lot of persistent disagreements within the online rationalist community. Eliezer's disagreements with Robin Hanson are well-known. My impression is that even people within MIRI have persistent disagreements with each other, though not as big as the Eliezer-Robin disagreements. I don't know for sure Alicorn and I would continue to disagree about the ethics of white lies if we talked it out thoroughly, but it wouldn't remotely surprise me. Et cetera.

Are ethics supposed to be Aumann-agreeable? I'm not at all sure the original proof extends that far. If it doesn't, that would cover your disagreement with Alicorn as well as a very large number of other disagreements here.

I don't think it would cover Eliezer vs. Robin, but I'm uncertain how "real" that disagreement is. If you forced both of them to come up with probability estimates for an em scenario vs. a foom scenario, then showed them both each other's estimates and put a gun to their heads and asked them whether they wanted to Aumann-update or not, I'm not sure they wouldn't agree to do so.

Even if they did, it might be consistent with their current actions: if there's a 20% chance of ems and 20% chance of foom (plus 60% chance of unpredictable future, cishuman future, or extinction) we would still need intellectuals and organizations planning specifically for each option, the same way I'm sure the Cold War Era US had different branches planning for a nuclear attack by USSR and a nonnuclear attack by USSR.

I will agree that there are some genuinely Aumann-incompatible disagreements on here, but I bet it's fewer than we think.

I guess I need to clarify that I think IQ is a terrible proxy for rationality, that the correlation is weak at best. And your suggested heuristic will do nothing to stop high IQ crackpots from ignoring the mainstream scientific consensus. Or even low IQ crackpots who can find high IQ crackpots to support them.

So I want to agree with you, but there's this big and undeniable problem we have and I'm curious how you think we should solve it if not through something resembling IQ.

You agree people need to be more charitable, at least toward out-group members. And this would presumably involve taking people whom we are tempted to dismiss, and instead not dismissing them and studying them further. But we can't do this for everyone - most people who look like crackpots are crackpots. There are very likely people who look like crackpots but are actually very smart out there (the cryonicists seem to be one group we can both agree on) and we need a way to find them so we can pay more attention to them.

We can't use our subjective feeling of is-this-guy-a-crackpot-or-not, because that's what got us into this problem in the first place. Presumably we should use the Outside View. But it's not obvious what we should be Outside Viewing on. The two most obvious candidates are "IQ" and "rationality", which when applied tend to produce IQ fetishism and in group favoritism (since until Stanovich actually produces his rationality quotient test and gives it to everybody, being in a self-identified rationalist community and probably having read the whole long set of sequences on rationality training is one of the few proxies for rationality we've got available).

I admit both of these proxies are terrible. But they seem to be the main thing keeping us from, on the one side, auto-rejecting all arguments that don't sound subjectively plausible to us at first glance, and on the other, having to deal with every stupid creationist and homeopath who wants to bloviate at us.

There seems to be something that we do do that's useful in this sphere. Like if someone with a site written in ALL CAPS and size 20 font claims that Alzheimers is caused by a bacterium, I dismiss it without a second thought because we all know it's a neurodegenerative disease. But when a friend who has no medical training, but who I know is smart and reasonable, recently made this claim, I looked it up, and sure enough there's a small but respectable community of microbiologists and neuroscientists investigating that maybe Alzheimers is triggered by an autoimmune response to some bacterium. It's still a long shot, but it's definitely not crackpottish. So somehow I seem to have some sort of ability for using the source of an implausible claim to determine whether I investigate it further, and I'm not sure how to describe the basis on which I make this decision beyond "IQ, rationality, and education".

I'm also curious as to how much of your willingness to agree with me in dismissing Plantinga is based on him being just one person. Would you be more inclined to take a sizeable online community of Plantingas seriously?

Well, empirically I did try to investigate natural law theology based on there being a sizeable community of smart people who thought it was valuable. I couldn't find anything of use in it, but I think it was a good decision to at least double-check.

On the one hand, I dislike the rhetoric of charity as I see it happen on LessWrong. On the other hand, in practice, you're probably right that people aren't too charitable. In practice, the problem is selective charity—a specific kind of selective charity, slanted towards favoring people's in-group. And you seem to endorse this selective charity.

If you think people are too uncharitable in general, but also that we're selectively charitable to the in-group, is that equivalent to saying the real problem is that we're not charitable enough to the out-group? If so, what subsection of the out-group would you recommend we be more charitable towards? And if we're not supposed to select that subsection based on their intelligence, rationality, education, etc, how do we select them?

And if we're not supposed to be selective, how do we avoid spending all our time responding to total, obvious crackpots like creationists and Time Cube Guy?

On the other hand, if you think someone's membership in the online rationalist community is a strong reason to treat what they say charitably, yeah, I'm calling that self-congratulatory nonsense. And that's the essence of my reply to your point #5. It's not people having self-congratulatory attitudes on an individual level. It's the self-congratulatory attitudes towards their in-group.

Yeah, this seems like the point we're disagreeing on. Granted that all proxies will be at least mostly terrible, do you agree that we do need some characteristics that point us to people worth treating charitably? And since you don't like mine, which ones are you recommending?

Replies from: ChrisHallquist, Kawoomba, TheAncientGeek
comment by ChrisHallquist · 2014-03-03T18:09:51.370Z · LW(p) · GW(p)

I question how objective these objective criteria you're talking about are. Usually when we judge someone's intelligence, we aren't actually looking at the results of an IQ test, so that's subjective. Ditto rationality. And if you were really that concerned about education, you'd stop paying so much attention to Eliezer or people who have a bachelor's degree at best and pay more attention to mainstream academics who actually have PhDs.

FWIW, actual heuristics I use to determine who's worth paying attention to are

  • What I know of an individual's track record of saying reasonable things.
  • The status of them and their ideas within mainstream academia (but because everyone knows about this heuristic, you have to watch out for people faking it).
  • Looking for other crackpot warning signs I've picked up over time, e.g. a non-expert claiming the mainstream academic view is not just wrong but obviously stupid, or being more interested in complaining that their views are being suppressed than in arguing for those views.

Which may not be great heuristics, but I'll wager that they're better than IQ (wager, in this case, being a figure of speech, because I don't actually know how you'd adjudicate that bet).

It may be helpful, here, to quote what I hope will be henceforth known as the Litany of Hermione: "The thing that people forget sometimes, is that even though appearances can be misleading, they're usually not."

You've also succeeded in giving me second thoughts about being signed up for cryonics, on the grounds that I failed to consider how it might encourage terrible mental habits in others. For the record, it strikes me as quite possible that mainstream neuroscientists are entirely correct to be dismissive of cryonics—my biggest problem is that I'm fuzzy on what exactly they think about cryonics (more here).

Replies from: Yvain
comment by Scott Alexander (Yvain) · 2014-03-03T19:12:26.508Z · LW(p) · GW(p)

Your heuristics are, in my opinion, too conservative or not strong enough.

Track record of saying reasonable things once again seems to put the burden of decision on your subjective feelings and so rule out paying attention to people you disagree with. If you're a creationist, you can rule out paying attention to Richard Dawkins, because if he's wrong about God existing, about the age of the Earth, and about homosexuality being okay, how can you ever expect him to be right about evolution? If you're anti-transhumanism, you can rule out cryonicists because they tend to say lots of other unreasonable things like that computers will be smarter than humans, or that there can be "intelligence explosions", or that you can upload a human brain.

Status within mainstream academia is a really good heuristic, and this is part of what I mean when I say I use education as a heuristic. Certainly to a first approximation, before investigating a field, you should just automatically believe everything the mainstream academics believe. But then we expect mainstream academia to be wrong in a lot of cases - you bring up the case of mainstream academic philosophy, and although I'm less certain than you are there, I admit I am very skeptical of them. So when we say we need heuristics to find ideas to pay attention to, I'm assuming we've already started by assuming mainstream academia is always right, and we're looking for which challenges to them we should pay attention to. I agree that "challenges the academics themselves take seriously" is a good first step, but I'm not sure that would suffice to discover the critique of mainstream philosophy. And it's very little help at all in fields like politics.

The crackpot warning signs are good (although it's interesting how often basically correct people end up displaying some of them because they get angry at having their ideas rejected and so start acting out, and it also seems like people have a bad habit of being very sensitive to crackpot warning signs the opposing side displays and very obtuse to those their own side displays). But once again, these signs are woefully inadequate. Plantinga doesn't look a bit like a crackpot.

You point out that "Even though appearances can be misleading, they're usually not." I would agree, but suggest you extend this to IQ and rationality. We are so fascinated by the man-bites-dog cases of very intelligent people believing stupid things that it's hard to remember that stupid things are still much, much likelier to be believed by stupid people.

(possible exceptions in politics, but politics is a weird combination of factual and emotive claims, and even the wrong things smart people believe in politics are in my category of "deserve further investigation and charitable treatment".)

You are right that I rarely have the results of an IQ test (or Stanovich's rationality test) in front of me. So when I say I judge people by IQ, I think I mean something like what you mean when you say "a track record of making reasonable statements", except basing "reasonable statements" upon "statements that follow proper logical form and make good arguments" rather than ones I agree with.

So I think it is likely that we both use a basket of heuristics that include education, academic status, estimation of intelligence, estimation of rationality, past track record, crackpot warning signs, and probably some others.

I'm not sure whether we place different emphases on those, or whether we're using about the same basket but still managing to come to different conclusions due to one or both of us being biased.

Replies from: TheAncientGeek, torekp, ChrisHallquist
comment by TheAncientGeek · 2014-04-24T09:55:41.562Z · LW(p) · GW(p)

Has anyone noticed that, given that most of the material on this site is essentially about philosophy, "academic philosophy sucks" is a Crackpot Warning Sign, i.e. "don't listen to the hidebound establishment"?

Replies from: ChrisHallquist, Vaniver
comment by ChrisHallquist · 2014-07-05T23:41:11.895Z · LW(p) · GW(p)

So I normally defend the "trust the experts" position, and I went to grad school for philosophy, but... I think philosophy may be an area where "trust the experts" mostly doesn't work, simply because with a few exceptions the experts don't agree on anything. (Fuller explanation, with caveats, here.)

Replies from: Protagoras, TheAncientGeek
comment by Protagoras · 2014-07-06T00:49:58.598Z · LW(p) · GW(p)

Also, from the same background, it is striking to me that a lot of the criticisms Less Wrong people make of philosophers are the same as the criticisms philosophers make of one another. I can't really think of a case where Less Wrong stakes out positions that are almost universally rejected by mainstream philosophers. And not just because philosophers disagree so much, though that's also true, of course; it seems rather that Less Wrong people greatly exaggerate how different they are and how much they disagree with the philosophical mainstream, to the extent that any such thing exists (again, a respect in which their behavior resembles how philosophers treat one another).

Replies from: TheAncientGeek
comment by TheAncientGeek · 2015-08-17T19:12:42.465Z · LW(p) · GW(p)

Since there is no consensus among philosophers, respecting philosophy is about respecting the process. The negative claims LW makes about philosophy are indeed similar to the negative claims philosophy makes about itself. LW also makes the positive claim that it has a better, faster method than philosophy, but in fact it just has a truncated version of the same method.

As Hallquist notes elsewhere

But Alexander misunderstands me when he says I accuse Yudkowsky “of being against publicizing his work for review or criticism.” He’s willing to publish it–but only to enlighten us lesser rationalists. He doesn’t view it as a necessary part of checking whether his views are actually right. That means rejecting the social process of science. That’s a problem.

Or, as I like to put it: if you half-bake your bread, you get your bread quicker... but it's half-baked.

comment by TheAncientGeek · 2014-07-06T15:02:33.698Z · LW(p) · GW(p)

If what philosophers specialise in is clarifying questions, they can be trusted to get the question right.

A typical failure mode of amateur philosophy is to substitute easier questions for harder ones.

comment by Vaniver · 2014-04-24T14:31:31.295Z · LW(p) · GW(p)

You might be interested in this article and this sequence (in particular, the first post of that sequence). "Academic philosophy sucks" is a Crackpot Warning Sign because of the implied brevity. A measured, in-depth criticism is one thing; a smear is another.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2014-04-24T18:05:09.123Z · LW(p) · GW(p)

Read them, not generally impressed.

comment by torekp · 2014-03-06T01:42:38.994Z · LW(p) · GW(p)

Track record of saying reasonable things once again seems to put the burden of decision on your subjective feelings and so rule out paying attention to people you disagree with.

Counterexample: your own investigation of natural law theology. Another: your investigation of the Alzheimer's bacterium hypothesis. I'd say your own intellectual history nicely demonstrates just how to pull off the seemingly impossible feat of detecting reasonable people you disagree with.

comment by ChrisHallquist · 2014-03-04T02:08:18.122Z · LW(p) · GW(p)

But then we expect mainstream academia to be wrong in a lot of cases - you bring up the case of mainstream academic philosophy, and although I'm less certain than you are there, I admit I am very skeptical of them.

With philosophy, I think the easiest, most important thing for non-experts to notice is that (with a few arguable exceptions that are independently pretty reasonable) philosophers basically don't agree on anything. In the case of e.g. Plantinga specifically, non-experts can notice that few other philosophers think the modal ontological argument accomplishes anything.

The crackpot warning signs are good (although it's interesting how often basically correct people end up displaying some of them because they get angry at having their ideas rejected and so start acting out...

Examples?

We are so fascinated by the man-bites-dog cases of very intelligent people believing stupid things that it's hard to remember that stupid things are still much, much likelier to be believed by stupid people.

(possible exceptions in politics, but politics is a weird combination of factual and emotive claims, and even the wrong things smart people believe in politics are in my category of "deserve further investigation and charitable treatment".)

I don't think "smart people saying stupid things" reaches anything like man-bites-dog levels of surprisingness. Not only do you have examples from politics, but also from religion. According to a recent study, a little over a third of academics claim that "I know God really exists and I have no doubts about it," which is maybe less than the general public but still a sizeable minority (and the same study found many more academics take some sort of weaker pro-religion stance). And in my experience, even highly respected academics, when they try to defend religion, routinely make juvenile mistakes that make Plantinga look good by comparison. (Remember, I used Plantinga in the OP not because he makes the dumbest mistakes per se but as an example of how bad arguments can signal high intelligence.)

So when I say I judge people by IQ, I think I mean something like what you mean when you say "a track record of making reasonable statements", except basing "reasonable statements" upon "statements that follow proper logical form and make good arguments" rather than ones I agree with.

Proper logical form comes cheap, just add a premise which says, "if everything I've said so far is true, then my conclusion is true." "Good arguments" is much harder to judge, and seems to defeat the purpose of having a heuristic for deciding who to treat charitably: if I say "this guy's arguments are terrible," and you say, "you should read those arguments more charitably," it doesn't do much good for you to defend that claim by saying, "well, he has a track record of making good arguments."
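
To spell the cheap trick out (the premises here are placeholders, not anyone's actual argument):

```latex
\begin{align*}
& P_1, P_2, \ldots, P_n && \text{(any premises whatsoever)}\\
& (P_1 \land P_2 \land \cdots \land P_n) \rightarrow C && \text{(the added ``if everything I've said is true...'' premise)}\\
& \therefore\; C && \text{(valid by modus ponens, however bad the premises are)}
\end{align*}
```

The argument is now formally valid no matter what the P's say, which is why validity alone can't be the filter.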

Replies from: Yvain
comment by Scott Alexander (Yvain) · 2014-03-07T20:23:45.788Z · LW(p) · GW(p)

I agree that disagreement among philosophers is a red flag that we should be looking for alternative positions.

But again, I don't feel like that's strong enough. Nutrition scientists disagree. Politicians and political scientists disagree. Psychologists and social scientists disagree. Now that we know we can be looking for high-quality contrarians in those fields, how do we sort out the high-quality ones from the lower-quality ones?

Examples?

Well, take Barry Marshall. Became convinced that ulcers were caused by a stomach bacterium (he was right; later won the Nobel Prize). No one listened to him. He said that "my results were disputed and disbelieved, not on the basis of science but because they simply could not be true...if I was right, then treatment for ulcer disease would be revolutionized. It would be simple, cheap and it would be a cure. It seemed to me that for the sake of patients this research had to be fast tracked. The sense of urgency and frustration with the medical community was partly due to my disposition and age."

So Marshall decided that, since he couldn't get anyone to fund a study, he would study it on himself: he drank a culture of the bacteria and got really sick.

Then due to a weird chain of events, his results ended up being published in the Star, a tabloid newspaper that by his own admission "talked about alien babies being adopted by Nancy Reagan", before they made it into legitimate medical journals.

I feel like it would be pretty easy to check off a bunch of boxes on any given crackpot index..."believes the establishment is ignoring him because of their biases", "believes his discovery will instantly solve a centuries-old problem with no side effects", "does his studies on himself", "studies get published in tabloid rather than journal", but these were just things he naturally felt or had to do because the establishment wouldn't take him seriously and he couldn't do things "right".

I don't think "smart people saying stupid things" reaches anything like man-bites-dog levels of surprisingness. Not only do you have examples from politics, but also from religion. According to a recent study, a little over a third of academics claim that "I know God really exists and I have no doubts about it," which is maybe less than the general public but still a sizeable minority

I think it is much, much less than the general public, but I don't think that has as much to do with IQ per se as with academic culture. But although I agree it's interesting that IQ doesn't predict correct beliefs any more strongly than it does, I am still very surprised that you don't seem to think it matters at all (or at least not significantly). What if we switched gears? Agreeing that the fact that a contrarian theory is invented or held by high-IQ people is no guarantee of its success, can we agree that the fact that a contrarian theory is invented and mostly held by low-IQ people is a very strong strike against it?

Proper logical form comes cheap, just add a premise which says, "if everything I've said so far is true, then my conclusion is true."

Proper logical form comes cheap, but a surprising number of people don't bother even with that. Do you frequently see people appending "if everything I've said so far is true, then my conclusion is true" to screw with people who judge arguments based on proper logical form?

Replies from: Jiro, ChrisHallquist
comment by Jiro · 2014-03-08T03:12:50.891Z · LW(p) · GW(p)

The extent to which science rejected the ulcer bacterium theory has been exaggerated. (And that article also addresses some quotes from Marshall himself which don't exactly match up with the facts.)

comment by ChrisHallquist · 2014-03-08T19:05:05.526Z · LW(p) · GW(p)

Nutrition scientists disagree. Politicians and political scientists disagree. Psychologists and social scientists disagree. Now that we know we can be looking for high-quality contrarians in those fields, how do we sort out the high-quality ones from the lower-quality ones?

What's your proposal for how to do that, aside from just evaluating the arguments the normal way? Ignore the politicians, and we're basically talking about people who all have PhDs, so education can't be the heuristic. You also proposed IQ and rationality, but admitted we aren't going to have good ways to measure them directly, aside from looking for "statements that follow proper logical form and make good arguments." I pointed out that "good arguments" is circular if we're trying to decide who to read charitably, and you had no response to that.

That leaves us with "proper logical form," about which you said:

Proper logical form comes cheap, but a surprising number of people don't bother even with that. Do you frequently see people appending "if everything I've said so far is true, then my conclusion is true" to screw with people who judge arguments based on proper logical form?

In response to this, I'll just point out that this is not an argument in proper logical form. It's a lone assertion followed by a rhetorical question.

comment by Kawoomba · 2014-03-03T16:32:57.640Z · LW(p) · GW(p)

Are ethics supposed to be Aumann-agreeable?

If they were, uFAI would be a non-issue. (They are not.)

Replies from: pcm, TheAncientGeek
comment by pcm · 2014-03-04T17:12:09.528Z · LW(p) · GW(p)

Ethics ought to be Aumann-agreeable. That would only imply uFAI is a non-issue if AGI developers were ideal Bayesians (improbable) and aware of claims of uFAI risks.
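
As a toy illustration of what the "ideal Bayesians" clause is doing (a sketch of my own, with made-up names and numbers; it shows the easy case where all evidence gets shared, whereas Aumann's theorem only needs the posteriors to be common knowledge):

```python
import numpy as np

grid = np.linspace(0.01, 0.99, 99)          # candidate values for a coin's bias
prior = np.ones_like(grid) / len(grid)      # common (uniform) prior -- a key assumption

def posterior(prior, heads, tails):
    """Bayesian update for a Bernoulli likelihood on a discrete grid."""
    likelihood = grid**heads * (1 - grid)**tails
    unnorm = prior * likelihood
    return unnorm / unnorm.sum()

alice = posterior(prior, heads=8, tails=2)   # Alice's private flips
bob   = posterior(prior, heads=3, tails=7)   # Bob's private flips

print(grid @ alice, grid @ bob)              # before sharing: different posterior means

pooled = posterior(prior, heads=11, tails=9) # after honestly pooling all the evidence
print(grid @ pooled)                         # the single estimate both now hold
```

The moral is only that, for ideal agents with a common prior, persistent disagreement has to come from unshared information or dishonesty, not from the updating itself - which is exactly the idealization real AGI developers fail.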

comment by TheAncientGeek · 2014-03-04T18:34:47.677Z · LW(p) · GW(p)

How do you know that they are not?

comment by TheAncientGeek · 2014-04-24T09:45:09.567Z · LW(p) · GW(p)

Not being charitable to people isn't a problem, providing you don't mistake your lack of charity for evidence that they are stupid or irrational.

comment by Solvent · 2014-03-03T01:32:07.046Z · LW(p) · GW(p)

I don't know for sure Alicorn and I would continue to disagree about the ethics of white lies if we talked it out thoroughly, but it wouldn't remotely surprise me.

That's a moral disagreement, not a factual disagreement. Alicorn is a deontologist, and you guys probably wouldn't be able to reach consensus on that no matter how hard you tried.

Replies from: fubarobfusco
comment by fubarobfusco · 2014-03-03T01:53:30.691Z · LW(p) · GW(p)

Three somewhat disconnected responses —

For a moral realist, moral disagreements are factual disagreements.

I'm not sure that humans can actually have radically different terminal values from one another; but then, I'm also not sure that humans have terminal values.

It seems to me that "deontologist" and "consequentialist" refer to humans who happen to have noticed different sorts of patterns in their own moral responses — not groups of humans that have fundamentally different values written down in their source code somewhere. ("Moral responses" are things like approving, disapproving, praising, punishing, feeling pride or guilt, and so on. They are adaptations being executed, not optimized reflections of fundamental values.)

comment by blacktrance · 2014-03-03T16:30:06.605Z · LW(p) · GW(p)

the problem is selective charity—a specific kind of selective charity, slanted towards favoring people's in-group.

The danger of this approach is obvious, but it can have benefits as well. You may not know that a particular LessWronger is sane, but you do know that on average LessWrong has higher sanity than the general population. That's a reason to be more charitable.

comment by [deleted] · 2014-03-02T16:08:56.752Z · LW(p) · GW(p)

So although I would endorse Aumann-adjusting as a final verdict with many of the people on this site, I think it's great that we have discussions - even heated discussions - first, and I think a lot of those discussions might look from the outside like disrespect and refusal to Aumann adjust.

Besides which, we're human beings, not fully-rational Bayesian agents by mathematical construction. Trying to pretend to reason like a computer is a pointless exercise when compared to actually talking things out the human way, and thus ensuring (the human way) that all parties leave better-informed than they arrived.

comment by elharo · 2014-03-02T11:50:41.355Z · LW(p) · GW(p)

[The role] IQ is playing here is that of a quasi-objective Outside View measure of a person's ability to be correct and rational.

FYI, IQ, whatever it measures, has little to no correlation with either epistemic or instrumental rationality. For extensive discussion of this topic see Keith Stanovich's What Intelligence Tests Miss.

In brief, intelligence (as measured by an IQ test), epistemic rationality (the ability to form correct models of the world), and instrumental rationality (the ability to define and carry out effective plans for achieving one's goals) are three different things. A high score on an IQ test does not correlate with enhanced epistemic or instrumental rationality.

For examples of the lack of correlation between IQ and epistemic rationality, consider the very smart folks you have likely met who have gotten themselves wrapped up in incredibly complex and intellectually challenging belief systems that do not match the world we live in: Objectivism, Larouchism, Scientology, apologetics, etc.

For examples of the lack of correlation between IQ and instrumental rationality, consider the very smart folks you have likely met who cannot get out of their parents' basement, and whose impact on the world is limited to posting long threads on Internet forums and playing WoW.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2014-03-11T10:01:08.114Z · LW(p) · GW(p)

Keith Stanovich's What Intelligence Tests Miss

LW discussion.

comment by shware · 2014-03-01T20:16:41.633Z · LW(p) · GW(p)

A Christian proverb says: “The Church is not a country club for saints, but a hospital for sinners”. Likewise, the rationalist community is not an ivory tower for people with no biases or strong emotional reactions, it’s a dojo for people learning to resist them.

SlateStarCodex

comment by Oscar_Cunningham · 2014-03-01T11:20:17.135Z · LW(p) · GW(p)

People on LW have started calling themselves "rationalists". This was really quite alarming the first time I saw it. People used to use the words "aspiring rationalist" to describe themselves, with the implication that we didn't consider ourselves close to rational yet.

Replies from: JWP, ChrisHallquist, army1987
comment by JWP · 2014-03-01T17:23:38.984Z · LW(p) · GW(p)

Identifying as a "rationalist" is encouraged by the welcome post.

We'd love to know who you are, what you're doing, what you value, how you came to identify as a rationalist

Replies from: Eliezer_Yudkowsky, Oscar_Cunningham
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2014-03-01T20:15:49.946Z · LW(p) · GW(p)

Edited the most recent welcome post and the post of mine that it linked to.

Does anyone have a 1-syllable synonym for 'aspiring'? It seems like we need to impose better discipline on this for official posts.

Replies from: somervta, CCC, Bugmaster, wwa, brazil84
comment by somervta · 2014-03-04T00:49:44.327Z · LW(p) · GW(p)

Consider "how you came to aspire to rationality/be a rationalist" instead of "identify as an aspiring rationalist".

Or, can the identity language and switch to "how you came to be interested in rationality".

comment by CCC · 2014-03-02T04:56:10.637Z · LW(p) · GW(p)

Looking at a thesaurus, "would-be" may be a suitable synonym.

Other alternatives include 'budding', or maybe 'keen'.

comment by Bugmaster · 2014-03-05T01:13:08.299Z · LW(p) · GW(p)

FWIW, "aspiring rationalist" always sounded quite similar to "Aspiring Champion" to my ears.

That said, why do we need to use any syllables at all to say "aspiring rationalist"? Do we have some sort of a secret rite or a trial that an aspiring rationalist must pass in order to become a true rationalist? If I have to ask, does that mean I'm not a rationalist? :-/

comment by wwa · 2014-03-02T02:48:30.958Z · LW(p) · GW(p)

demirationalist - on one hand, something already above average, like in demigod. On the other, leaves the "not quite there" feeling. My second best was epirationalist

Didn't find anything better in my opinion, but in case you want to give it a (somewhat cheap) shot yourself... I just looped over this

comment by brazil84 · 2014-03-02T23:22:48.390Z · LW(p) · GW(p)

The only thing I can think of is "na" - e.g., in Dune, Feyd-Rautha was the "na-Baron," meaning that he had been nominated to succeed the Baron. (And in the story he certainly was aspiring to be Baron.)

Not quite what you are asking for but not too far either.

comment by Oscar_Cunningham · 2014-03-01T19:17:58.704Z · LW(p) · GW(p)

And the phrase "how you came to identify as a rationalist" links to the very page where in the comments Robin Hanson suggests not using the term "rationalist", and the alternative "aspiring rationalist" is suggested!

comment by ChrisHallquist · 2014-03-03T07:58:45.096Z · LW(p) · GW(p)

People on LW have started calling themselves "rationalists". This was really quite alarming the first time I saw it. People used to use the words "aspiring rationalist" to describe themselves, with the implication that we didn't consider ourselves close to rational yet.

My initial reaction to this was warm fuzzy feelings, but I don't think it's correct, any more than calling yourself a theist indicates believing you are God. "Rationalist" means believing in rationality (in the sense of being pro-rationality), not believing yourself to be perfectly rational. That's the sense of rationalist that goes back at least as far as Bertrand Russell. In the first paragraph of his "Why I Am A Rationalist", for example, Russell identifies as a rationalist but also says, "We are not yet, and I suppose men and women never will be, completely rational."

This also seems like it would be a futile linguistic fight. A better solution might be to consciously avoid using "rationalist" when talking about Aumann's agreement theorem—use "ideal rationalists" or "perfect rationalists". I also tend to use phrases like "members of the online rationalist community," but that's more to indicate I'm not talking about Russell or Dawkins (much less Descartes).

Replies from: Nornagest
comment by Nornagest · 2014-03-05T01:48:01.985Z · LW(p) · GW(p)

The -ist suffix can mean several things in English. There's the sense of "practitioner of [an art or science, or the use of a tool]" (dentist, cellist). There's "[habitual?] perpetrator of" or "participant in [an act]" (duelist, arsonist). And then there's "adherent of [an ideology, doctrine, or teacher]" (theist, Marxist). Seems to me that the problem has to do with equivocation between these senses as much as with the lack of an "aspiring". And personally, I'm a lot more comfortable with the first sense than the others; you can after all be a bad dentist.

Perhaps we should distinguish between rationaledores and rationalistas? Spanglish, but you get the picture.

Replies from: polymathwannabe, Vaniver
comment by polymathwannabe · 2014-03-05T02:04:38.167Z · LW(p) · GW(p)

The -dor suffix is only added to verbs. The Spanish word would be razonadores ("ratiocinators").

comment by Vaniver · 2014-03-05T15:46:25.307Z · LW(p) · GW(p)

"Reasoner" captures this sense of "someone who does an act," but not quite the "practitioner" sense, and it does a poor job of pointing at the cluster we want to point at.

comment by moridinamael · 2014-03-08T18:42:42.894Z · LW(p) · GW(p)

I've recently had to go on (for a few months) some medication which had the side effect of significant cognitive impairment. Let's hand-wavingly equate this side effect to shaving thirty points off my IQ. That's what it felt like from the inside.

While on the medication, I constantly felt the need to idiot-proof my own life, to protect myself from the mistakes that my future self would certainly make. My ability to just trust myself to make good decisions in the future was removed.

This had far more ramifications than I can go into in a brief comment, but I can generalize by saying that I was forced to plan more carefully, to slow down, to double-check my work. Unable to think as deeply into problems in a freewheeling cognitive fashion, I was forced to break them down carefully on paper and understand that anything I didn't write down would be forgotten.

Basically what I'm trying to say is that being stupider probably forced me to be more rational.

When I went off the medication, I felt my old self waking up again, the size of concepts I could manipulate growing until I could once again comprehend and work on programs I had written before starting the drugs in the first place. I could follow long chains of verbal argument and concoct my own. And ... I pretty much immediately went back to my old problem solving habits of relying on big leaps in insight. Which I don't really blame myself for, because that's sort of what brains are for.

I don't know what the balance is. I don't know how and when to rein in the self-defeating aspects of intelligence. I probably made fewer mistakes when I was dumber, but I also did fewer things, period.

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2014-03-15T06:50:28.051Z · LW(p) · GW(p)

What medication?

comment by fubarobfusco · 2014-03-01T18:51:17.815Z · LW(p) · GW(p)

One thing I hear you saying here is, "We shouldn't build social institutions and norms on the assumption that members of our in-group are unusually rational." This seems right, and obviously so. We should expect people here to be humans and to have the usual human needs for community, assurance, social pleasantries, and so on; as well as the usual human flaws of defensiveness, in-group biases, self-serving biases, motivated skepticism, and so on.

Putting on the "defensive LW phyggist" hat: Eliezer pointed out a long time ago that knowing about biases can hurt people, and the "clever arguer" is a negative trope throughout that swath of the sequences. The concerns you're raising aren't really news here ...

Taking the hat off again: ... but it's a good idea to remind people of them, anyway!


Regarding jargon: I don't think the "jargon as membership signaling" approach can be taken very far. Sure, signaling is one factor, but there are others, such as —

  • Jargon as context marker. By using jargon that we share, I indicate that I will understand references to concepts that we also share. This is distinct from signaling that we are social allies; it tells you what concepts you can expect me to understand.
  • Jargon as precision. Communities that talk about a particular topic a lot will develop more fine-grained distinctions about it. In casual conversation, a group of widgets is more-or-less the same as a set of widgets; but to a mathematician, "group" and "set" refer to distinct concepts.
  • Jargon as vividness. When a community has vivid stories about a topic, referring to the story can communicate more vividly than merely mentioning the topic. Dropping a Hamlet reference can more vividly convey indecisiveness than merely saying "I am indecisive."

comment by 7EE1D988 · 2014-03-01T10:58:35.185Z · LW(p) · GW(p)

I can see benefits to the principle of charity. It helps avoid flame wars, and from a Machiavellian point of view it's nice to close off the "what I actually meant was..." responses.

Some people are just bad at explaining their ideas correctly (too hasty, didn't reread themselves, not a high enough verbal SAT, foreign mother tongue, inferential distance, etc.), others are just bad at reading and understanding others' ideas correctly (too hasty, didn't read the whole argument before replying, glossed over that one word which changed the whole meaning of a sentence, etc.).

I've seen many poorly explained arguments which I could understand as true or at least pointing in interesting directions, which were summarily ignored or shot down by uncharitable readers.

Replies from: alicey, RobinZ
comment by alicey · 2014-03-01T16:28:32.743Z · LW(p) · GW(p)

i tend to express ideas tersely, which counts as poorly-explained if my audience is expecting more verbiage, so they round me off to the nearest cliche and mostly downvote me

i have mostly stopped posting or commenting on lesswrong and stackexchange because of this

like, when i want to say something, i think "i can predict that people will misunderstand and downvote me, but i don't know what improvements i could make to this post to prevent this. sigh."

revisiting this on 2014-03-14, i consider that perhaps i am likely to discard parts of the frame message and possibly outer message - because, to me of course it's a message, and to me of course the meaning of (say) "belief" is roughly what http://wiki.lesswrong.com/wiki/Belief says it is

for example, i suspect that the use of more intuitively sensible grammar in this comment (mostly just a lack of capitalization) often discards the frame-message-bit of "i might be intelligent" (or ... something) that such people understand from messages (despite this being an incorrect thing to understand)

Replies from: shokwave, TheOtherDave, ThrustVectoring
comment by shokwave · 2014-03-03T05:16:35.252Z · LW(p) · GW(p)

so they round me off to the nearest cliche

I have found great value in re-reading my posts looking for possible similar-sounding cliches, and re-writing to make the post deliberately inconsistent with those.

For example, the previous sentence could be rounded off to the cliche "Avoid cliches in your writing". I tried to avoid that possible interpretation by including "deliberately inconsistent".

Replies from: RobinZ
comment by RobinZ · 2014-04-23T15:34:23.573Z · LW(p) · GW(p)

I like it - do you know if it works in face-to-face conversations?

comment by TheOtherDave · 2014-03-01T16:40:02.588Z · LW(p) · GW(p)

Well, you describe the problem as terseness.
If that's true, it suggests that one set of improvements might involve explaining your ideas more fully and providing more of your reasons for considering those ideas true and relevant and important.

Have you tried that?
If so, what has the result been?

Replies from: alicey
comment by alicey · 2014-03-01T17:58:28.470Z · LW(p) · GW(p)

-

Replies from: TheOtherDave, elharo, jamesf
comment by TheOtherDave · 2014-03-01T21:30:38.900Z · LW(p) · GW(p)

I understand this to mean that the only value you see to non-brevity is its higher success at manipulation.

Is that in fact what you meant?

Replies from: alicey
comment by alicey · 2014-03-14T23:49:31.230Z · LW(p) · GW(p)

-

comment by elharo · 2014-03-01T19:31:05.983Z · LW(p) · GW(p)

In other words, you prefer brevity to clarity and being understood? Something's a little skewed here.

It sounds like you and TheOtherDave have both identified the problem. Assuming you know what the problem is, why not fix it?

It may be that you are incorrect about the cause of the problem, but it's easy enough test your hypothesis. The cost is low and the value of the information gained would be high. Either you're right and brevity is your problem, in which case you should be more verbose when you wish to be understood. Or you're wrong and added verbosity would not make people less inclined to "round you off to the nearest cliche", in which case you could look for other changes to your writing that would help readers understand you better.

Replies from: philh
comment by philh · 2014-03-02T01:07:48.149Z · LW(p) · GW(p)

Well, I think that "be more verbose" is a little like "sell nonapples". A brief post can be expanded in many different directions, and it might not be obvious which directions would be helpful and which would be boring.

comment by jamesf · 2014-03-02T03:27:05.996Z · LW(p) · GW(p)

What does brevity offer you that makes it worthwhile, even when it impedes communication?

Predicting how communication will fail is generally Really Hard, but it's a good opportunity to refine your models of specific people and groups of people.

Replies from: alicey
comment by alicey · 2014-03-14T23:29:40.035Z · LW(p) · GW(p)

improving signal to noise, holding the signal constant, is brevity

when brevity impedes communication, but only with a subset of people, then the reduced signal is because they're not good at understanding brief things, so it is worth not being brief with them, but it's not fun

comment by ThrustVectoring · 2014-03-02T10:57:28.284Z · LW(p) · GW(p)

I suspect that the issue is not terseness, but rather not understanding and bridging the inferential distance between you and your audience. It's hard for me to say more without a specific example.

Replies from: alicey
comment by alicey · 2014-03-14T23:42:11.948Z · LW(p) · GW(p)

revisiting this, i consider that perhaps i am likely to discard parts of the frame message and possibly outer message - because, to me of course it's a message, and to me of course the meaning of (say) "belief" is roughly what http://wiki.lesswrong.com/wiki/Belief says it is

comment by RobinZ · 2014-04-23T16:13:27.497Z · LW(p) · GW(p)

Some people are just bad at explaining their ideas correctly (too hasty, didn't reread themselves, not a high enough verbal SAT, foreign mother tongue, inferential distance, etc.), others are just bad at reading and understanding others' ideas correctly (too hasty, didn't read the whole argument before replying, glossed over that one word which changed the whole meaning of a sentence, etc.).

This understates the case, even. At different times, an individual can be more or less prone to haste, laziness, or any of several possible sources of error, and at times, you yourself can commit any of these errors. I think the greatest value of a well-formulated principle of charity is that it leads to a general trend of "failure of communication -> correction of failure of communication -> valuable communication" instead of "failure of communication -> termination of communication".

I've seen many poorly explained arguments which I could understand as true or at least pointing in interesting directions, which were summarily ignored or shot down by uncharitable readers.

Actually, there's another point you could make along the lines of Jay Smooth's advice about racist remarks, particularly the part starting at 1:23, when you are discussing something in 'public' (e.g. anywhere on the Internet). If I think my opposite number is making bad arguments (e.g. when she is proposing an a priori proof of the existence of a god), I can think of few more convincing avenues to demonstrate to all the spectators that she's full of it than by giving her every possible opportunity to reveal that her argument is not wrong.

Regardless of what benefit you are balancing against a cost, though, a useful principle of charity should emphasize that your failure to engage with someone you don't believe to be sufficiently rational is a matter of the cost of time, not the value of their contribution. Saying "I don't care what you think" will burn bridges with many non-LessWrongian folk; saying, "This argument seems like a huge time sink" is much less likely to.

Replies from: Lumifer
comment by Lumifer · 2014-04-23T16:38:24.318Z · LW(p) · GW(p)

a useful principle of charity should emphasize that your failure to engage with someone you don't believe to be sufficiently rational is a matter of the cost of time, not the value of their contribution.

So if I believe that someone is stupid, mindkilled, etc. and is not capable (at least at the moment) of contributing anything valuable, does this principle emphasize that I should not believe that, or that I should not tell that to this someone?

Replies from: Vaniver, RobinZ, TheAncientGeek, TheAncientGeek
comment by Vaniver · 2014-04-23T18:36:54.900Z · LW(p) · GW(p)

It's not obvious to me that's the right distinction to make, but I do think that the principle of charity does actually result in a map shift relative to the default. That is, an epistemic principle of charity is a correction like one would make with the fundamental attribution error: "I have only seen one example of this person doing X, I should restrain my natural tendency to overestimate the resulting update I should make."

That is, if you have not used the principle of charity in reaching the belief that someone else is stupid or mindkilled, then you should not use that belief as reason to not apply the principle of charity.

Replies from: Lumifer
comment by Lumifer · 2014-04-23T19:02:19.922Z · LW(p) · GW(p)

the principle of charity does actually result in a map shift relative to the default.

What is the default? And is it everyone's default, or only the unenlightened ones', or whose?

This implies that the "default" map is wrong -- correct?

if you have not used the principle of charity in reaching the belief

I don't quite understand that. When I'm reaching a particular belief, I basically do it to the best of my ability -- if I am aware of errors, biases, etc. I will try to correct them. Are you saying that the principle of charity is special in that regard -- that I should apply it anyway even if I don't think it's needed?

An attribution error is an attribution error -- if you recognize it you should fix it, and not apply global corrections regardless.

Replies from: Vaniver, TheAncientGeek
comment by Vaniver · 2014-04-23T20:44:40.434Z · LW(p) · GW(p)

This implies that the "default" map is wrong -- correct?

I am pretty sure that most humans are uncharitable in interpreting the skills, motives, and understanding of someone they see as a debate opponent, yes. This observation is basically the complement of the principle of charity- the PoC exists because "most people are too unkind here; you should be kinder to try to correct," and if you have somehow hit the correct level of kindness, then no further change is necessary.

I don't quite understand that. When I'm reaching a particular belief, I basically do it to the best of my ability -- if I am aware of errors, biases, etc. I will try to correct them. Are you saying that the principle of charity is special in that regard

I think that, in this regard, the principle of charity is like the corrections for other biases.

that I should apply it anyway even if I don't think it's needed?

This question seems just weird to me. How do you know you can trust your cognitive system that says "nah, I'm not being biased right now"? This calls to mind the statistical prediction rule results, where people would come up with all sorts of stories about why their impression was more accurate than linear fits to the accumulated data - but, of course, those were precisely the times when they should have silenced their inner argument and gone with the more accurate rule. The point of these sorts of things is that you take them seriously, even when you generate rationalizations for why you shouldn't take them seriously!

(There are, of course, times when the rules do not apply, and not every argument against a counterbiasing technique is a rationalization. But you should be doubly suspicious against such arguments.)

Replies from: Lumifer
comment by Lumifer · 2014-04-23T20:57:30.933Z · LW(p) · GW(p)

This question seems just weird to me. How do you know you can trust your cognitive system that says "nah, I'm not being biased right now"?

It's weird to me that the question is weird to you X-/

You know when and to what degree you can trust your cognitive system in the usual way: you look at what it tells you and test it against the reality. In this particular case you check whether later, more complete evaluations corroborate your initial perception or there is a persistent bias.

If you can't trust your cognitive system then you get all tangled up in self-referential loops and really have no basis on which to decide by how much to correct your thinking or even which corrections to apply.

Replies from: Vaniver, TheAncientGeek
comment by Vaniver · 2014-04-23T22:01:37.503Z · LW(p) · GW(p)

It's weird to me that the question is weird to you X-/

To me, a fundamental premise of the bias-correction project is "you are running on untrustworthy hardware." That is, biases are not just of academic interest, and not just ways that other people make mistakes, but known flaws that you personally should attend to with regard to your own mind.

There's more, but I think in order to explain that better I should jump to this first:

If you can't trust your cognitive system then you get all tangled up in self-referential loops and really have no basis on which to decide by how much to correct your thinking or even which corrections to apply.

You can ascribe different parts of your cognitive system different levels of trust, and build a hierarchy out of them. To illustrate with a simple example, I can model myself as having a 'motive-detection system,' which is normally rather accurate but loses accuracy when used on opponents. Then there's a higher-level system that is a 'bias-detection system' which detects how much accuracy is lost when I use my motive-detection system on opponents. Because this is hierarchical, I think it bottoms out in a finite number of steps; I can use my trusted 'statistical inference' system to verify the results from my 'bias-detection' system, which then informs how I use the results from my 'motive-detection system.'

Suppose I just had the motive-detection system, and learned of PoC. The wrong thing to do would be to compare my motive-detection system to itself, find no discrepancy, and declare myself unbiased. "All my opponents are malevolent or idiots, because I think they are." The right thing to do would be to construct the bias-detection system, and actively behave in such a way to generate more data to determine whether or not my motive-detection system is inaccurate, and if so, where and by how much. Only after a while of doing this can I begin to trust myself to know whether or not the PoC is needed, because by then I've developed a good sense of how unkind I become when considering my opponents.

If I mistakenly believe that my opponents are malevolent idiots, I can only get out of that hole by either severing the link between my belief in their evil stupidity and my actions when discussing with them, or by discarding that belief and seeing if the evidence causes it to regrow. I word it this way because one needs to move to the place of uncertainty, and then consider the hypotheses, rather than saying "Is my belief that my opponents are malevolent idiots correct? Well, let's consider all the pieces of evidence that come to mind right now: yes, they are evil and stupid! Myth confirmed."

Which brings us to here:

You know when and to what degree you can trust your cognitive system in the usual way: you look at what it tells you and test it against the reality. In this particular case you check whether later, more complete evaluations corroborate your initial perception or there is a persistent bias.

Your cognitive system has a rather large degree of control over the reality that you perceive; to a large extent, that is the point of having a cognitive system. Unless the 'usual way' of verifying the accuracy of your cognitive system takes that into account, which it does not do by default for most humans, then this will not remove most biases. For example, could you detect confirmation bias by checking whether more complete evaluations corroborate your initial perception? Not really- you need to have internalized the idea of 'confirmation bias' in order to define 'more complete evaluations' to mean 'evaluations where I seek out disconfirming evidence also' rather than just 'evaluations where I accumulate more evidence.'

[Edit]: On rereading this comment, the primary conclusion I was going for - that PoC encompasses both procedural and epistemic shifts, which are deeply entwined with each other - is there but not as clear as I would like.

Replies from: Lumifer
comment by Lumifer · 2014-04-24T16:10:39.216Z · LW(p) · GW(p)

Before I get into the response, let me make a couple of clarifying points.

First, the issue somewhat drifted from "to what degree should you update on the basis of what looks stupid" to "how careful you need to be about updating your opinion of your opponents in an argument". I am not primarily talking about arguments, I'm talking about the more general case of observing someone being stupid and updating on this basis towards the "this person is stupid" hypothesis.

Second, my evaluation of stupidity is based more on how a person argues rather than on what position he holds. To give an example, I know some smart people who have argued against evolution (not in the sense that it doesn't exist, but rather in the sense that the current evolutionary theory is not a good explanation for a bunch of observables). On the other hand, if someone comes in and goes "ha ha duh of course evolution is correct my textbook says so what u dumb?", well then...

"you are running on untrustworthy hardware."

I don't like this approach. Mainly this has to do with the fact that unrolling "untrustworthy" makes it very messy.

As you yourself point out, a mind is not a single entity. It is useful to treat it as a set or an ecology of different agents which have different capabilities, often different goals, and typically pull in different directions. Given this, who is doing the trusting or distrusting? And given the major differences between the agents, what does "trust" even mean?

I find this expression is usually used to mean that human mind is not a simple-enough logical calculating machine. My first response to this is duh! and the second one is that this is a good thing.

Consider an example. Alice, a hetero girl, meets Bob at a party. Bob looks fine, speaks the right words, etc. and Alice's conscious mind finds absolutely nothing wrong with the idea of dragging him into her bed. However her gut instincts scream at her to run away fast -- for no good reason that her consciousness can discern. Basically she has a really bad feeling about Bob for no articulable reason. Should she tell herself her hardware is untrustworthy and invite Bob overnight?

The wrong thing to do would be to compare my motive-detection system to itself, find no discrepancy, and declare myself unbiased.

True, which is why I want to compare to reality, not to itself. If you decided that Mallory is a malevolent idiot and still happen to observe him later on, well, does he behave like one? Does additional evidence support your initial reaction? If it does, you can probably trust your initial reactions more. If it does not, you can't and should adjust.

Yes, I know about anchoring and such. But again, at some point you have to trust yourself (or some modules of yourself) because if you can't there is just no firm ground to stand on at all.

If I mistakenly believe that my opponents are malevolent idiots, I can only get out of that hole by ... discarding that belief and seeing if the evidence causes it to regrow.

I don't see why. Just do the usual Bayesian updating on the evidence. If the weight of the accumulated evidence points out that they are not, well, update. Why do you have to discard your prior in order to do that?

you need to have internalized the idea of 'confirmation bias' in order to define 'more complete evaluations' to mean 'evaluations where I seek out disconfirming evidence also' rather than just 'evaluations where I accumulate more evidence.'

Yep. Which is why the Sequences, the Kahneman & Tversky book, etc. are all very useful. But, as I've been saying in my responses to RobinZ, for me this doesn't fall under the principle of charity, this falls under the principle of "don't be an idiot yourself".

Replies from: Vaniver
comment by Vaniver · 2014-04-24T18:56:56.855Z · LW(p) · GW(p)

First, the issue somewhat drifted from "to what degree should you update on the basis of what looks stupid" to "how careful you need to be about updating your opinion of your opponents in an argument".

I understand PoC to only apply in the latter case, with a broad definition of what constitutes an argument. A teacher, for example, likely should not apply the PoC to their students' answers, and instead be worried about the illusion of transparency and the double illusion of transparency. (Checking the ancestral comment, it's not obvious to me that you wanted to switch contexts - 7EE1D988 and RobinZ both look like they're discussing conversations or arguments - and you may want to be clearer in the future about context changes.)

I am not primarily talking about arguments, I'm talking about the more general case of observing someone being stupid and updating on this basis towards the "this person is stupid" hypothesis.

Here, I think you just need to make fundamental attribution error corrections (as well as any outgroup bias corrections, if those apply).

Given this, who is doing the trusting or distrusting?

Presumably, whatever module sits on the top of the hierarchy (or sufficiently near the top of the ecological web).

Should she tell herself her hardware is untrustworthy and invite Bob overnight?

From just the context given, no, she should trust her intuition. But we could easily alter the context so that she should tell herself that her hardware is untrustworthy and override her intuition- perhaps she has social anxiety or paranoia she's trying to overcome, and a trusted (probably female) friend doesn't get the same threatening vibe from Bob.

True, which is why I want to compare to reality, not to itself. If you decided that Mallory is a malevolent idiot and still happen to observe him later on, well, does he behave like one?

You don't directly perceive reality, though, and your perceptions are determined in part by your behavior, in ways both trivial and subtle. Perhaps Mallory is able to read your perception of him from your actions, and thus behaves cruelly towards you?

As a more mathematical example, in the iterated prisoner's dilemma with noise, TitForTat performs poorly against itself, whereas a forgiving TitForTat performs much better. PoC is the forgiveness that compensates for the noise.
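
A rough simulation of that point (my own sketch; the payoff matrix, 5% noise rate, and 20% forgiveness rate are assumptions, not anything from the game-theory literature):

```python
import random

PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(opp_last):
    return 'C' if opp_last is None else opp_last       # copy opponent's last move

def forgiving_tft(opp_last, forgive_p=0.2):
    if opp_last == 'D' and random.random() < forgive_p:
        return 'C'                                      # sometimes let a defection slide
    return tit_for_tat(opp_last)

def play(strat_a, strat_b, rounds=1000, noise=0.05):
    a_last = b_last = None
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strat_a(b_last), strat_b(a_last)
        # noise: each intended move gets flipped with small probability
        if random.random() < noise: a = 'D' if a == 'C' else 'C'
        if random.random() < noise: b = 'D' if b == 'C' else 'C'
        pa, pb = PAYOFF[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        a_last, b_last = a, b
    return score_a, score_b

random.seed(0)
print("TFT vs TFT:            ", play(tit_for_tat, tit_for_tat))
print("Forgiving vs forgiving:", play(forgiving_tft, forgiving_tft))
```

Run it a few times and the forgiving pairing should come out ahead, because occasionally overlooking a defection breaks the retaliation echoes that noise sets off - which is the sense in which PoC is the forgiveness that compensates for the noise.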

I don't see why.

This is discussed a few paragraphs ago, but this is a good opportunity to formulate it in a way that is more abstract but perhaps clearer: claims about other people's motives or characteristics are often claims about counterfactuals or hypotheticals. Suppose I believe "If I were to greet Mallory, he would snub me," and thus in order to avoid the status hit I don't say hi to Mallory. In order to confirm or disconfirm that belief, I need to alter my behavior; if I don't greet Mallory, then I don't get any evidence!

(For the PoC specifically, the hypothetical is generally "if I put extra effort into communicating with Mallory, that effort would be wasted," where the PoC argues that you've probably overestimated the probability that you'll waste effort. This is why RobinZ argues for disengaging with "I don't have the time for this" rather than "I don't think you're worth my time.")

But, as I've been saying in my responses to RobinZ, for me this doesn't fall under the principle of charity, this falls under the principle of "don't be an idiot yourself".

I think that "don't be an idiot" is far too terse a package. It's like boiling down moral instruction to "be good," without any hint that "good" is actually a tremendously complicated concept, and being good a difficult endeavor which is aided by many different strategies. If an earnest youth came to you and asked how to think better, would you tell them just "don't be an idiot" or would you point them to a list of biases and counterbiasing principles?

Replies from: RobinZ, Lumifer, TheAncientGeek
comment by RobinZ · 2014-04-24T21:52:24.266Z · LW(p) · GW(p)

I think that "don't be an idiot" is far too terse a package.

In Lumifer's defense, this thread demonstrates pretty conclusively that "the principle of charity" is also far too terse a package. (:

Replies from: Vaniver, Lumifer
comment by Vaniver · 2014-04-24T23:57:35.344Z · LW(p) · GW(p)

For an explanation, agreed; for a label, disagreed. That is, I think it's important to reduce "don't be an idiot" into its many subcomponents, and identify them separately whenever possible.

Replies from: RobinZ
comment by RobinZ · 2014-04-25T00:40:01.397Z · LW(p) · GW(p)

Mm - that makes sense.

comment by Lumifer · 2014-04-25T00:54:47.271Z · LW(p) · GW(p)

"the principle of charity" is also far too terse a package

Well, not quite, I think the case here was/is that we just assign different meanings to these words.

P.S. And here is yet another meaning...

comment by Lumifer · 2014-04-25T00:52:59.357Z · LW(p) · GW(p)

perhaps she has social anxiety or paranoia she's trying to overcome

That's not the case where she shouldn't trust her hardware -- that's the case where her software has a known bug.

In order to confirm or disconfirm that belief, I need to alter my behavior; if I don't greet Mallory, then I don't get any evidence!

Sure, so you have to trade off your need to discover more evidence against the cost of doing so. Sometimes it's worth it, sometimes not.

where the PoC argues that you've probably overestimated the probability that you'll waste effort.

Really? For a randomly sampled person, my prior already is that talking to him/her will be wasted effort. And if in addition to that he offers evidence of stupidity, well... I think you underappreciate opportunity costs -- there are a LOT of people around and most of them aren't very interesting.

I think that "don't be an idiot" is far too terse a package.

Yes, but properly unpacking it will take between one and several books at best :-/

Replies from: Vaniver
comment by Vaniver · 2014-04-25T02:32:49.609Z · LW(p) · GW(p)

That's not the case where she shouldn't trust her hardware -- that's the case where her software has a known bug.

For people, is there a meaningful difference between the two? The primary difference between "your software is buggy" and "your hardware is untrustworthy" that I see is that the first suggests the solution is easier: just patch the bug! It is rarely enough to just know that the problem exists, or what steps you should take to overcome the problem; generally one must train themselves into being someone who copes effectively with the problem (or, rarely, into someone who does not have the problem).

I think you underappreciate opportunity costs -- there are a LOT of people around and most of them aren't very interesting.

I agree there are opportunity costs; I see value in walled gardens. But just because there is value doesn't mean you're not overestimating that value, and we're back to my root issue that your response to "your judgment of other people might be flawed" seems to be "but I've judged them already, why should I do it twice?"

Yes, but properly unpacking it will take between one and several books at best :-/

Indeed; I have at least a shelf and growing devoted to decision-making and ameliorative psychology.

Replies from: Lumifer
comment by Lumifer · 2014-04-25T03:23:08.911Z · LW(p) · GW(p)

For people, is there a meaningful difference between the two?

Of course. A stroke, for example, is a purely hardware problem. In more general terms, hardware = brain and software = mind.

"but I've judged them already, why should I do it twice?"

I said I will update on the evidence. The difference seems to be that you consider that insufficient -- you want me to actively seek new evidence and I think it's rarely worthwhile.

Replies from: EHeller, Vaniver, None
comment by EHeller · 2014-04-25T04:25:47.124Z · LW(p) · GW(p)

A stroke, for example, is a purely hardware problem. In more general terms, hardware = brain and software = mind.

I don't think this is a meaningful distinction for people. People can (and often do) have personality changes (and other changes of 'mind') after a stroke.

Replies from: Lumifer
comment by Lumifer · 2014-04-25T14:55:13.822Z · LW(p) · GW(p)

I don't think this is a meaningful distinction for people.

You don't think it's meaningful to model people as having a hardware layer and a software layer? Why?

People can (and often do) have personality changes (and other changes of 'mind') after a stroke.

Why are you surprised that changes (e.g. failures) in hardware affect the software? That seems to be the way these things work, both in biological brains and in digital devices. In fact, humans are unusual in that for them the causality goes both ways: software can and does affect the hardware, too. But hardware affects the software in pretty much every situation where it makes sense to speak of hardware and software.

comment by Vaniver · 2014-04-25T14:58:08.974Z · LW(p) · GW(p)

In more general terms, hardware = brain and software = mind.

Echoing the others, this is more dualistic than I'm comfortable with. It looks to me that in people, you just have 'wetware' that is both hardware and software simultaneously, rather than the crisp distinction that exists between them in silicon.

you want me to actively seek new evidence and I think it's rarely worthwhile.

Correct. I do hope that you noticed that this still relies on a potentially biased judgment ("I think it's rarely worthwhile" is a counterfactual prediction about what would happen if you did apply the PoC), but beyond that I think we're at mutual understanding.

Replies from: Lumifer
comment by Lumifer · 2014-04-25T15:36:30.017Z · LW(p) · GW(p)

Echoing the others, this is more dualistic than I'm comfortable with

To quote myself, we're talking about "model[ing] people as having a hardware layer and a software layer". And to quote Monty Python, it's only a model. It is appropriate for some uses and inappropriate for others. For example, I think it's quite appropriate for a neurosurgeon. But it's probably not as useful for thinking about biofeedback, to give another example.

I do hope that you noticed that this still relies on a potentially biased judgment

Of course, but potentially biased judgments are all I have. They are still all I have even if I were to diligently apply the PoC everywhere.

comment by [deleted] · 2014-04-25T03:37:20.155Z · LW(p) · GW(p)

Of course. A stroke, for example, is a purely hardware problem. In more general terms, hardware = brain and software = mind.

Huh, I don't think I've ever understood that metaphor before. Thanks. It's oddly dualist.

comment by TheAncientGeek · 2014-04-24T20:07:42.410Z · LW(p) · GW(p)

I'll say it again: the PoC isn't at all about when it's worth investing effort in talking to someone.

comment by TheAncientGeek · 2014-04-23T21:19:00.808Z · LW(p) · GW(p)

What is the reality about whether you interpreted someone correctly? When do you hit the bedrock of Real Meaning?

comment by TheAncientGeek · 2014-04-23T20:57:12.877Z · LW(p) · GW(p)

tl;dr: The principle of charity corrects biases you're not aware of.

comment by RobinZ · 2014-04-23T19:01:47.659Z · LW(p) · GW(p)

I see that my conception of the "principle of charity" is either non-trivial to articulate or so inchoate as to be substantially altered by my attempts to do so. Bearing that in mind:

The principle of charity isn't a propositional thesis, it's a procedural rule, like the presumption of innocence. It exists because the cost of false positives is high relative to the cost of reducing false positives: the shortest route towards correctness in many cases is the instruction or argumentation of others, many of whom would appear, upon initial contact, to be stupid, mindkilled, dishonest, ignorant, or otherwise unreliable sources upon the subject in question. The behavior proposed by the principle of charity is intended to result in your being able to reliably distinguish between failures of communication and failures of reasoning.
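
(To make that trade-off concrete, here is a toy expected-cost sketch -- all numbers entirely made up -- of why a cheap clarifying probe can beat immediate dismissal when false positives are expensive.)

```python
# Toy expected-cost comparison for the procedural-rule framing above.
# All numbers are hypothetical; they exist only to illustrate the asymmetry
# between a costly false positive (writing off someone who was merely
# miscommunicating) and a cheap clarifying probe.

p_miscommunication = 0.3    # chance the prima facie absurdity is a communication failure
cost_false_positive = 20.0  # lost value from dismissing a worthwhile interlocutor
cost_clarifying = 1.0       # cost of asking "are you saying [paraphrase]?"

# Expected cost of dismissing immediately: you only pay when dismissal was a mistake.
expected_cost_dismiss = p_miscommunication * cost_false_positive

# Expected cost of the charitable probe: you always pay the small clarification cost.
expected_cost_probe = cost_clarifying

print(expected_cost_dismiss)  # 6.0
print(expected_cost_probe)    # 1.0
# With these numbers the cheap test wins; with a much lower p_miscommunication
# or a much pricier clarification, it would not.
```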

My remark took the above as a basis and proposed behavior to execute in cases where the initial remark strongly suggests that the speaker is thinking irrationally (e.g. an assertion that the modern evolutionary synthesis is grossly incorrect) and your estimate of the time required to evaluate the actual state of the speaker's reasoning processes is more than you are willing to spend. In such a case, the principle of charity implies two things:

  • You should consider the nuttiness of the speaker as being an open question with a large prior probability, akin to your belief prior to lifting a dice cup that you have not rolled double-sixes, rather than a closed question with a large posterior probability, akin to your belief that the modern evolutionary synthesis is largely correct.
  • You should withdraw from the conversation in such a fashion as to emphasize that you are in general willing to put forth the effort to understand what they are saying, but that the moment is not opportune.

Minor tyop fix T1503-4.

Replies from: Lumifer, TheAncientGeek
comment by Lumifer · 2014-04-23T19:14:43.654Z · LW(p) · GW(p)

the cost of false positives is high relative to the cost of reducing false positives

I don't see it as self-evident. Or, more precisely, in some situations it is, and in other situations it is not.

The behavior proposed by the principle of charity is intended to result in your being able to reliably distinguish between failures of communication and failures of reasoning.

You are saying (a bit later in your post) that the principle of charity implies two things. The second one is a pure politeness rule and it doesn't seem to me that the fashion of withdrawing from a conversation will help me "reliably distinguish" anything.

As to the first point, you are basically saying I should ignore evidence (or, rather, shift the evidence into the prior and refuse to estimate the posterior). That doesn't help me reliably distinguish anything either.

In fact, I don't see why there should be a particular exception here ("a procedural rule") to the bog-standard practice of updating on evidence. If my updating process is incorrect, I should fix it and not paper it over with special rules for seemingly-stupid people. If it is reasonably OK, I should just go ahead and update. That will not necessarily result in either a "closed question" or a "large posterior" -- it all depends on the particulars.

Replies from: TheAncientGeek, RobinZ, RobinZ, RobinZ
comment by TheAncientGeek · 2014-04-23T21:44:05.422Z · LW(p) · GW(p)

I'll say it again: POC doesn't mean "believe everyone is sane and intelligent", it means "treat everyone's comments as though they were made by a sane, intelligent person".

Replies from: TheAncientGeek, satt, Lumifer
comment by TheAncientGeek · 2015-08-17T18:46:17.728Z · LW(p) · GW(p)

I.e., it's a defeasible assumption. If you fail, you have evidence that it was a dumb comment. If you succeed, you have evidence it wasn't. Either way, you have evidence, and you are not sitting in an echo chamber where your beliefs about people's dumbness go forever untested, because you reject out of hand anything that sounds superficially dumb, or was made by someone you have labelled, however unjustly, as dumb.

Replies from: Lumifer
comment by Lumifer · 2015-08-17T20:14:18.562Z · LW(p) · GW(p)

your beliefs about people's dumbness go forever untested

That's fine. I have limited information processing capacity -- my opportunity costs for testing other people's dumbness are fairly high.

In the information age I don't see how anyone can operate without the "this is too stupid to waste time on" pre-filter.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2015-08-18T07:37:28.759Z · LW(p) · GW(p)

The PoC tends to be advised in the context of philosophy, where there is a background assumption of infinite amounts of time to consider things. The resource-constrained version would be to interpret comments charitably once you have, for whatever reason, got into a discussion... with the corollary of reserving some space for "I might be wrong" where you haven't had the resources to test the hypothesis.

Replies from: Lumifer
comment by Lumifer · 2015-08-18T14:20:53.371Z · LW(p) · GW(p)

background assumption of infinite amounts of time to consider things

LOL. While ars may be longa, vita is certainly brevis. This is a silly assumption, better suited for theology, perhaps -- it, at least, promises infinite time. :-)

If I were living in the English countryside around the XVIII century I might have had a different opinion on the matter, but I do not.

interpret comments charitably once you have, for whatever reason, got into a discussion

It's not a binary either-or situation. I am willing to interpret comments charitably according to my (updateable) prior of how knowledgeable, competent, and reasonable the writer is. In some situations I would stop and ponder, in others I would roll my eyes and move on.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2015-08-19T06:50:35.249Z · LW(p) · GW(p)

Users report that charitable interpretation gives you more evidence for updating than you would have otherwise.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2015-08-25T09:40:44.601Z · LW(p) · GW(p)

Are you already optimal? How do you know?

comment by satt · 2015-08-18T00:14:52.669Z · LW(p) · GW(p)

As I operationalize it, that definition effectively waters down the POC to a degree I suspect most POC proponents would be unhappy with.

Sane, intelligent people occasionally say wrong things; in fact, because of selection effects, it might even be that most of the wrong things I see & hear in real life come from sane, intelligent people. So even if I were to decide that someone who's just made a wrong-sounding assertion were sane & intelligent, that wouldn't lead me to treat the assertion substantially more charitably than I otherwise would (and I suspect that the kind of person who likes the(ir conception of the) POC might well say I were being "uncharitable").

Edit: I changed "To my mind" to "As I operationalize it". Also, I guess a shorter form of this comment would be: operationalized like that, I think I effectively am applying the POC already, but it doesn't feel like it from the inside, and I doubt it looks like it from the outside.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2015-08-18T07:02:55.174Z · LW(p) · GW(p)

You have uncharitably interpreted my formulation to mean "treat everyone's comments as though they were made by a sane, intelligent person who may or may not have been having an off day". What kind of guideline is that?

The charitable version would have been "treat everyone's comments as though they were made by someone sane and intelligent at the time".

Replies from: satt
comment by satt · 2015-08-19T02:30:22.785Z · LW(p) · GW(p)

(I'm giving myself half a point for anticipating that someone might reckon I was being uncharitable.)

You have uncharitably interpreted my formulation to mean "treat everyone's comments as though they were made by a sane, intelligent person who may or may not have been having an off day". What kind of guideline is that?

A realistic one.

The charitable version would have been "treat everyone's comments as though they were made by someone sane and intelligent at the time".

The thing is, that version actually sounds less charitable to me than my interpretation. Why? Well, I see two reasonable ways to interpret your latest formulation.

The first is to interpret "sane and intelligent" as I normally would, as a property of the person, in which case I don't understand how appending "at the time" makes a meaningful difference. My earlier point that sane, intelligent people say wrong things still applies. Whispering in my ear, "no, seriously, that person who just said the dumb-sounding thing is sane and intelligent right now" is just going to make me say, "right, I'm not denying that; as I said, sanity & intelligence aren't inconsistent with saying something dumb".

The second is to insist that "at the time" really is doing some semantic work here, indicating that I need to interpret "sane and intelligent" differently. But what alternative interpretation makes sense in this context? The obvious alternative is that "at the time" is drawing my attention to whatever wrong-sounding comment was just made. But then "sane and intelligent" is really just a camouflaged assertion of the comment's worthiness, rather than the claimant's, which reduces this formulation of the POC to "treat everyone's comments as though the comments are cogent".

The first interpretation is surely not your intended one because it's equivalent to one you've ruled out. So presumably I have to go with the second interpretation, but it strikes me as transparently uncharitable, because it sounds like a straw version of the POC ("oh, so I'm supposed to treat all comments as cogent, even if they sound idiotic?").

The third alternative, of course, is that I'm overlooking some third sensible interpretation of your latest formulation, but I don't see what it is; your comment's too pithy to point me in the right direction.

Replies from: TheAncientGeek, TheAncientGeek
comment by TheAncientGeek · 2015-08-25T09:39:12.743Z · LW(p) · GW(p)

But then "sane and intelligent" is really just a camouflaged assertion of the comment's worthiness, rather than the claimant's, which reduces this formulation of the POC to "treat everyone's comments as though the comments are cogent". [..] ("oh, so I'm supposed to treat all comments as cogent, even if they sound idiotic?"

Yep.

You have assumed that cannot be the correct interpretation of the PoC, without saying why. In light of your other comments, it could well be that you are assuming that the PoC can only be true by correspondence to reality or false, by lack of correspondence. But norms, guidelines, heuristics, and advice lie on an orthogonal axis to true/false: they are guides to action, not passive reflections. Their equivalent of the true/false axis is the Works/Does Not Work axis. So would adoption of the PoC work as a way of understanding people, and calibrating your confidence levels? That is the question.

Replies from: satt
comment by satt · 2015-08-27T01:52:20.743Z · LW(p) · GW(p)

But norms, guidelines, heuristics, and advice lie on an orthogonal axis to true/false: they are guides to action, not passive reflections. Their equivalent of the true/false axis is the Works/Does Not Work axis. So would adoption of the PoC work as a way of understanding people, and calibrating your confidence levels? That is the question.

OK, but that's not an adequate basis for recommending a given norm/guideline/heuristic. One has to at least sketch an answer to the question, drawing on evidence and/or argument (as RobinZ sought to).

You have assumed that cannot be the correct interpretation of the PoC, without saying why.

Well, because it's hard for me to believe you really believe that interpretation and understand it in the same way I would naturally operationalize it: namely, noticing and throwing away any initial suspicion I have that a comment's wrong, and then forcing myself to pretend the comment must be correct in some obscure way.

As soon as I imagine applying that procedure to a concrete case, I cringe at how patently silly & unhelpful it seems. Here's a recent-ish, specific example of me expressing disagreement with a statement I immediately suspected was incorrect.

What specifically would I have done if I'd treated the seemingly patently wrong comment as cogent instead? Read the comment, thought "that can't be right", then shaken my head and decided, "no, let's say that is right", and then...? Upvoted the comment? Trusted but verified (i.e. not actually treated the comment as cogent)? Replied with "I presume this comment is correct, great job"? Surely these are not courses of action you mean to recommend (the first & third because they actively support misinformation, the second because I expect you'd find it insufficiently charitable). Surely I am being uncharitable in operationalizing your recommendation this way...even though that does seem to me the most literal, straightforward operationalization open to me. Surely I misunderstand you. That's why I assumed "that cannot be the correct interpretation" of your POC.

Replies from: CCC, TheAncientGeek
comment by CCC · 2015-08-27T09:42:38.160Z · LW(p) · GW(p)

Well, because it's hard for me to believe you really believe that interpretation and understand it in the same way I would naturally operationalize it: namely, noticing and throwing away any initial suspicion I have that a comment's wrong, and then forcing myself to pretend the comment must be correct in some obscure way.

If I may step in at this point; "cogent" does not mean "true". The principle of charity (as I understand it) merely recommends treating any commenter as reasonably sane and intelligent. This does not mean he can't be wrong - he may be misinformed, he may have made a minor error in reasoning, he may simply not know as much about the subject as you do. Alternatively, you may be misinformed, or have made a minor error in reasoning, or not know as much about the subject as the other commenter...

So the correct course of action then, in my opinion, is to find the source of error and to be polite about it. The example post you linked to was a great example - you provided statistics, backed them up, and linked to your sources. You weren't rude about it, you simply stated facts. As far as I could see, you treated RomeoStevens as sane, intelligent, and simply lacking in certain pieces of pertinent historical knowledge - which you have now provided.

(As to what RomeoStevens said - it was cogent. That is to say, it was pertinent and relevant to the conversation at the time. That it was wrong does not change the fact that it was cogent; if it had been right it would have been a worthwhile point to make.)

Replies from: satt, TheAncientGeek
comment by satt · 2015-08-27T22:27:47.609Z · LW(p) · GW(p)

If I may step in at this point; "cogent" does not mean "true".

Yes, and were I asked to give synonyms for "cogent", I'd probably say "compelling" or "convincing" [edit: rather than "true"]. But an empirical claim is only compelling or convincing (and hence may only be cogent) if I have grounds for believing it very likely true. Hence "treat all comments as cogent, even if they sound idiotic" translates [edit: for empirical comments, at least] to "treat all comments as if very likely true, even if they sound idiotic".

Now you mention the issue of relevance, I think that, yeah, I agree that relevance is part of the definition of "cogent", but I also reckon that relevance is only a necessary condition for cogency, not a sufficient one. And so...

As to what RomeoStevens said - it was cogent. That is to say, it was pertinent and relevant to the conversation at the time.

...I have to push back here. While pertinent, the comment was not only wrong but (to me) obviously very likely wrong, and RomeoStevens gave no evidence for it. So I found it unreasonable, unconvincing, and unpersuasive — the opposite of dictionary definitions of "cogent". Pertinence & relevance are only a subset of cogency.

The principle of charity (as I understand it) merely recommends treating any commenter as reasonably sane and intelligent. This does not mean he can't be wrong - he may be misinformed, he may have made a minor error in reasoning, he may simply not know as much about the subject as you do.

That's why I wrote that that version of the POC strikes me as watered down; someone being "reasonably sane and intelligent" is totally consistent with their just having made a trivial blunder, and is (in my experience) only weak evidence that they haven't just made a trivial blunder, so "treat commenters as reasonably sane and intelligent" dissolves into "treat commenters pretty much as I'd treat anyone".

Replies from: CCC
comment by CCC · 2015-08-28T08:08:21.252Z · LW(p) · GW(p)

Hence "treat all comments as cogent, even if they sound idiotic" translates [edit: for empirical comments, at least] to "treat all comments as if very likely true, even if they sound idiotic".

Then "cogent" was probably the wrong word to use.

I'd need a word that means pertinent, relevant, and believed to have been most likely true (or at least useful to say) by the person who said it; but not necessarily actually true.

While pertinent, the comment was not only wrong but (to me) obviously very likely wrong, and RomeoStevens gave no evidence for it. So I found it unreasonable, unconvincing, and unpersuasive — the opposite of dictionary definitions of "cogent". Pertinence & relevance are only a subset of cogency.

Okay, I appear to have been using a different definition (see definition two).

I think at this point, so as not to get stuck on semantics, we should probably taboo the word 'cogent'.

(Having said that, I do agree anyone with access to the statistics you quoted would most likely find RomeoStevens's comments unreasonable, unconvincing and unpersuasive).

so "treat commenters as reasonably sane and intelligent" dissolves into "treat commenters pretty much as I'd treat anyone".

Then you may very well be effectively applying the principle already. Looking at your reply to RomeoStevens supports this assertion.

Replies from: satt
comment by satt · 2015-08-29T12:06:11.197Z · LW(p) · GW(p)

Then "cogent" was probably the wrong word to use.

TheAncientGeek assented to that choice of word, so I stuck with it. His conception of the POC might well be different from yours and everyone else's (which is a reason I'm trying to pin down precisely what TheAncientGeek means).

Okay, I appear to have been using a different definition (see definition two).

Fair enough, I was checking different dictionaries (and I've hitherto never noticed other people using "cogent" for "pertinent").

Then you may very well be effectively applying the principle already. Looking at your reply to RomeoStevens supports this assertion.

Maybe, though I'm confused here by TheAncientGeek saying in one breath that I applied the POC to RomeoStevens, but then agreeing ("Thats exactly what I mean.") in the next breath with a definition of the POC that implies I didn't apply the POC to RomeoStevens.

Replies from: CCC
comment by CCC · 2015-08-31T09:18:07.771Z · LW(p) · GW(p)

I think that you and I are almost entirely in agreement, then. (Not sure about TheAncientGeek).

Maybe, though I'm confused here by TheAncientGeek saying in one breath that I applied the POC to RomeoStevens, but then agreeing ("Thats exactly what I mean.") in the next breath with a definition of the POC that implies I didn't apply the POC to RomeoStevens.

I think you're dealing with double-illusion-of-transparency issues here. He gave you a definition ("treat everyone's comments as though they were made by someone sane and intelligent at the time") by which he meant some very specific concept which he best approximated by that phrase (call this Concept A). You then considered this phrase, and mapped it to a similar-but-not-the-same concept (Concept B) which you defined and tried to point out a shortcoming in ("namely, noticing and throwing away any initial suspicion I have that a comment's wrong, and then forcing myself to pretend the comment must be correct in some obscure way.").

Now, TheAncientGeek is looking at your words (describing Concept B) and reading into them the very similar Concept A; where your post in response to RomeoStevens satisfies Concept A but not Concept B.

Nailing down the difference between A and B will be extremely tricky and will probably require both of you to describe your concepts in different words several times. (The English language is a remarkably lossy means of communication).

Replies from: satt
comment by satt · 2015-08-31T13:10:52.953Z · LW(p) · GW(p)

Your diagnosis sounds all too likely. I'd hoped to minimize the risk of this kind of thing by concretizing and focusing on a specific, publicly-observable example, but that might not have helped.

comment by TheAncientGeek · 2015-08-27T15:01:28.012Z · LW(p) · GW(p)

Yes, that was an example of PoC, because satt assumed RomeoStevens had failed to look up the figures, rather than insanely believing that 120,000ish < 500ish.

comment by TheAncientGeek · 2015-08-27T15:09:35.117Z · LW(p) · GW(p)

But norms, guidelines, heuristics, and advice lie on an orthogonal axis to true/false: they are guides to action, not passive reflections. Their equivalent of the true/false axis is the Works/Does Not Work axis. So would adoption of the PoC work as a way of understanding people, and calibrating your confidence levels? That is the question.

OK, but that's not an adequate basis for recommending a given norm/guideline/heuristic. One has to at least sketch an answer to the question, drawing on evidence and/or argument

Yes, but that's beside the original point. What you call a realistic guideline doesn't work as a guideline at all, and therefore isn't a charitable interpretation of the PoC.

Justifying the PoC as something that works at what it is supposed to do is a question that can be answered, but it is a separate question.

namely, noticing and throwing away any initial suspicion I have that a comment's wrong, and then forcing myself to pretend the comment must be correct in some obscure way.

That's exactly what I mean.

What specifically would I have done if I'd treated the seemingly patently wrong comment as cogent instead?

Cogent doesn't mean right. You actually succeeded in treating it as wrong for sane reasons, i.e., failure to check data.

Replies from: satt
comment by satt · 2015-08-27T23:33:30.415Z · LW(p) · GW(p)

But norms, guidelines, heuristics, and advice lie on an orthogonal axis to true/false: they are guides to action, not passive reflections. [...]

OK, but [...]

Yes, but that's beside the original point.

You brought it up!

What you call a realistic guideline doesn't work as a guideline at all, and therefore isn't a charitable interpretation of the PoC.

I continue to think that the version I called realistic is no less workable than your version.

Justifying the PoC as something that works at what it is supposed to do is a question that can be answered, but it is a separate question.

Again, it's a question you introduced. (And labelled "the question".) But I'm content to put it aside.

noticing and throwing away any initial suspicion I have that a comment's wrong, and then forcing myself to pretend the comment must be correct in some obscure way.

That's exactly what I mean.

But surely it isn't. Just 8 minutes earlier you wrote that a case where I did the opposite was an "example of PoC".

Cogent doesn't mean right.

See my response to CCC.

comment by TheAncientGeek · 2015-08-24T21:34:12.110Z · LW(p) · GW(p)

A realistic one.

But not one that tells you unambiguously what to do, i.e. not a usable guideline at all.

There's a lot of complaint about this heuristic along the lines that it doesn't guarantee perfect results... i.e., it's a heuristic.

And now there is the complaint that it's not realistic, that it doesn't reflect reality.

Ideal rationalists can stop reading now.

Everybody else: you're biased. Specifically, overconfident. Overconfidence makes people overestimate their ability to understand what people are saying, and underestimate the rationality of others. The PoC is a heuristic which corrects those biases. As a heuristic, an approximate method, it is based on the principle that overshooting the amount of sense people are making is better than undershooting. Overshooting would be a problem if there were some goldilocks alternative, some way of getting things exactly right. There isn't. The voice in your head that tells you you are doing just fine is the voice of your bias.
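
(One way to picture that claim: a toy simulation, with invented bias and noise parameters, in which a crude upward "charity" correction beats the uncorrected, downward-biased first impression on average.)

```python
# Toy simulation of the overshoot-vs-undershoot claim. Bias size, noise level,
# and the size of the "charity" correction are all invented for illustration.
import random

random.seed(0)
BIAS = -0.2     # assumed systematic underestimate of how much sense others make
CHARITY = 0.2   # crude upward correction applied by the heuristic

def one_reading():
    true_sense = random.random()                            # how much sense the comment really makes (0..1)
    impression = true_sense + BIAS + random.gauss(0, 0.1)   # biased, noisy first impression
    raw = min(max(impression, 0.0), 1.0)
    charitable = min(max(impression + CHARITY, 0.0), 1.0)
    return abs(raw - true_sense), abs(charitable - true_sense)

trials = [one_reading() for _ in range(10000)]
print(sum(t[0] for t in trials) / len(trials))  # mean error of the raw impression
print(sum(t[1] for t in trials) / len(trials))  # mean error with the charitable bump
# Under these made-up parameters the corrected estimate is closer on average;
# if the reader were not actually biased, the same bump would only add error.
```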

Replies from: satt
comment by satt · 2015-08-27T01:22:50.461Z · LW(p) · GW(p)

But not one that tells you unambiguously what to do, i.e. not a usable guideline at all.

I don't see how this applies any more to the "may or may not have been having an off day" version than it does to your original. They're about as vague as each other.

Overconfidence makes people overestimate their ability to understand what people are saying, and underestimate the rationality of others. The PoC is a heuristic which corrects those biases. As a heuristic, an approximate method, it is based on the principle that overshooting the amount of sense people are making is better than undershooting.

Understood. But it's not obvious to me that "the principle" is correct, nor is it obvious that a sufficiently strong POC is better than my more usual approach of expressing disagreement and/or asking sceptical questions (if I care enough to respond in the first place).

Replies from: TheAncientGeek
comment by TheAncientGeek · 2015-08-27T15:16:50.044Z · LW(p) · GW(p)

But not one that tells you unambiguously what to do, i.e. not a usable guideline at all.

I don't see how this applies any more to the "may or may not have been having an off day" version than it does to your original. They're about as vague as each other.

Mine implies a heuristic of "make repeated attempts at re-interpreting the comment using different background assumptions". What does yours imply?

Understood. But it's not obvious to me that "the principle" is correct,

As I have explained, it provides its own evidence.

nor is it obvious that a sufficiently strong POC is better than my more usual approach of expressing disagreement and/or asking sceptical questions (if I care enough to respond in the first place).

Neither of those is much good if interpreting someone who died 100 years ago.

Replies from: satt
comment by satt · 2015-08-27T23:18:11.323Z · LW(p) · GW(p)

Mine implies a heuristic of "make repeated attempts at re-interpreting the comment using different background assumptions".

I don't see how "treat everyone's comments as though they were made by a sane , intelligent, person" entails that without extra background assumptions. And I expect that once those extra assumptions are spelled out, the "may .or may have been having an off day" version will imply the same action(s) as your original version.

As I have explained, it provides its own evidence.

Well, when I've disagreed with people in discussions, my own experience has been that behaving according to my baseline impression of how much sense they're making gets me closer to understanding than consciously inflating my impression of how much sense they're making.

Neither of those is much good if interpreting someone who died 100 years ago.

A fair point, but one of minimal practical import. Almost all of the disagreements which confront me in my life are disagreements with live people.

comment by Lumifer · 2014-04-24T15:22:54.285Z · LW(p) · GW(p)

it means "treat everyone's comments as though they were made by a sane , intelligent, person".

I don't like this rule. My approach is simpler: attempt to understand what the person means. This does not require me to treat him as sane or intelligent.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2014-04-24T17:09:36.714Z · LW(p) · GW(p)

How do you know how many mistakes you are or aren't making?

Replies from: TheAncientGeek
comment by TheAncientGeek · 2015-08-17T18:54:20.635Z · LW(p) · GW(p)

The PoC is a way of breaking down "understand what the other person says" into smaller steps, not something entirely different. Treating your own mental processes as a black box that always delivers the right answer is a great way to stay in the grip of bias.

comment by RobinZ · 2014-04-23T21:33:46.132Z · LW(p) · GW(p)

The prior comment leads directly into this one: upon what grounds do I assert that an inexpensive test exists to change my beliefs about the rationality of an unfamiliar discussant? I realize that it is not true in the general case that the plural of anecdote is data, and much of the following lacks citations, but:

  • Many people raised to believe that evolution is false because it contradicts their religion change their minds in their first college biology class. (I can't attest to this from personal experience - this is something I've seen frequently reported or alluded to via blogs like Slacktivist.)
  • An intelligent, well-meaning, LessWrongian fellow was (hopefully-)almost driven out of my local Less Wrong meetup in no small part because a number of prominent members accused him of (essentially) being a troll. In the course of a few hours conversation between myself and a couple others focused on figuring out what he actually meant, I was able to determine that (a) he misunderstood the subject of conversation he had entered, (b) he was unskilled at elaborating in a way that clarified his meaning when confusion occurred, and (c) he was an intelligent, well-meaning, LessWrongian fellow whose participation in future meetups I would value.
  • I am unable to provide the details of this particular example (it was relayed to me in confidence), but an acquaintance of mine was a member of a group which was attempting to resolve an elementary technical challenge - roughly the equivalent of setting up a target-shooting range with a safe backstop in terms of training required. A proposal was made that was obviously unsatisfactory - the equivalent of proposing that the targets be laid on the ground and everyone shoot straight down from a second-story window - and my acquaintance's objection to it on common-sense grounds was treated with a response equivalent to, "You're Japanese, what would you know about firearms?" (In point of fact, while no metaphorical gunsmith, my acquaintance's knowledge was easily sufficient to teach a Boy Scout merit badge class.)
  • In my first experience on what was then known as the Internet Infidels Discussion Board, my propensity to ask "what do you mean by x" sufficed to transform a frustrated, impatient discussant into a cheerful, enthusiastic one - and simultaneously demonstrate that said discussant's arguments were worthless in a way which made it easy to close the argument.

In other words, I do not often see the case in which performing the tests implied by the principle of charity - e.g. "are you saying [paraphrase]?" - is wasteful, and I frequently see cases where failing to do so has been.

Replies from: Lumifer
comment by Lumifer · 2014-04-24T15:19:32.747Z · LW(p) · GW(p)

What you are talking about doesn't fall under the principle of charity (in my interpretation of it). It falls under the very general rubric of "don't be stupid yourself".

In particular, considering that the speaker expresses his view within a framework which is different from your default framework is not an application of the principle of charity -- it's an application of the principle "don't be stupid, of course people talk within their frameworks, not within your framework".

Replies from: RobinZ, RobinZ
comment by RobinZ · 2014-04-24T16:48:53.012Z · LW(p) · GW(p)

I might be arguing for something different than your principle of charity. What I am arguing for - and I realize now that I haven't actually explained a procedure, just motivations for one - is along the following lines:

When somebody says something prima facie wrong, there are several possibilities, both regarding their intended meaning:

  • They may have meant exactly what you heard.
  • They may have meant something else, but worded it poorly.
  • They may have been engaging in some rhetorical maneuver or joke.
  • They may have been deceiving themselves.
  • They may have been intentionally trolling.
  • They may have been lying.

...and your ability to infer such:

  • Their remark may resemble some reasonable assertion, worded badly.
  • Their remark may be explicable as ironic or joking in some sense.
  • Their remark may conform to some plausible bias of reasoning.
  • Their remark may seem like a lie they would find useful.*
  • Their remark may represent an attempt to irritate you for their own pleasure.*
  • Their remark may simply be stupid.
  • Their remark may allow more than one of the above interpretations.

What my interpretation of the principle of charity suggests as an elementary course of action in this situation is, with an appropriate degree of polite confusion, to ask for clarification or elaboration, and to accompany this request with paraphrases of the most likely interpretations you can identify of their remarks excluding the ones I marked with asterisks.

Depending on their actual intent, this has a good chance of making them:

  • Elucidate their reasoning behind the unbelievable remark (or admit to being unable to do so);
  • Correct their misstatement (or your misinterpretation - the difference is irrelevant);
  • Admit to their failed humor;
  • Admit to their being unable to support their assertion, back off from it, or sputter incoherently;
  • Grow impatient at your failure to rise to their goading and give up; or
  • Back off from (or admit to, or be proven guilty of) their now-unsupportable deception.

In the first three or four cases, you have managed to advance the conversation with a well-meaning discussant without insult; in the latter two or three, you have thwarted the goals of an ill-intentioned one - especially, in the last case, because you haven't allowed them the option of distracting everyone from your refutations by claiming you insulted them. (Even if they do so claim, it will be obvious that they have no just cause to be.)

I say this falls under the principle of charity because it involves (a) granting them, at least rhetorically, the best possible motives, and (b) giving them enough of your time and attention to seek engagement with their meaning, not just a lazy gloss of their words.

Minor formatting edit.

comment by RobinZ · 2014-06-10T15:27:47.687Z · LW(p) · GW(p)

Belatedly: I recently discovered that in 2011 I posted a link to an essay on debating charitably by pdf23ds a.k.a. Chris Capel - this is MichaelBishop's summary and this is a repost of the text (the original site went down some time ago). I recall endorsing Capel's essay unreservedly last time I read it; I would be glad to discuss the essay, my prior comments, or any differences that exist between the two if you wish.

comment by RobinZ · 2014-04-24T15:00:27.102Z · LW(p) · GW(p)

A small addendum, that I realized I omitted from my prior arguments in favor of the principle of charity:

Because I make a habit of asking for clarification when I don't understand, offering clarification when not understood, and preferring "I don't agree with your assertion" to "you are being stupid", people are happier to talk to me. Among the costs of always responding to what people say instead of your best understanding of what they mean - especially if you are quick to dismiss people when their statements are flawed - is that talking to you becomes costly: I have to word my statements precisely to ensure that I have not said something I do not mean, meant something I did not say, or made claims you will demand support for without support. If, on the other hand, I am confident that you will gladly allow me to correct my errors of presentation, I can simply speak, and fix anything I say wrong as it comes up.

Which, in turn, means that I can learn from a lot of people who would not want to speak to me otherwise.

Replies from: Lumifer
comment by Lumifer · 2014-04-24T15:37:21.994Z · LW(p) · GW(p)

responding to what people say instead of your best understanding of what they mean

Again: I completely agree that you should make your best effort to understand what other people actually mean. I do not call this charity -- it sounds like SOP and "just don't be an idiot yourself" to me.

comment by RobinZ · 2014-04-23T19:59:58.034Z · LW(p) · GW(p)

I don't see it as self-evident. Or, more precisely, in some situations it is, and in other situations it is not.

You're right: it's not self-evident. I'll go ahead and post a followup comment discussing what sort of evidential support the assertion has.

As to the first point, you are basically saying I should ignore evidence (or, rather, shift the evidence into the prior and refuse to estimate the posterior). That doesn't help me reliably distinguish anything either.

My usage of the terms "prior" and "posterior" was obviously mistaken. What I wanted to communicate with those terms was communicated by the analogies to the dice cup and to the scientific theory: it's perfectly possible for two hypotheses to have the same present probability but different expectations of future change to that probability. I have high confidence that an inexpensive test - lifting the dice cup - will change my beliefs about the value of the die roll by many orders of magnitude, and low confidence that any comparable test exists to affect my confidence regarding the scientific theory.

Replies from: Lumifer
comment by Lumifer · 2014-04-23T20:30:33.873Z · LW(p) · GW(p)

What I wanted to communicate with those terms was communicated by the analogies to the dice cup and to the scientific theory: it's perfectly possible for two hypotheses to have the same present probability but different expectations of future change to that probability.

I think you are talking about what in local parlance is called a "weak prior" vs a "strong prior". Bayesian updating involves assigning relative importance to the prior and to the evidence. A weak prior is easily changed by even not very significant evidence. On the other hand, it takes a lot of solid evidence to move a strong prior.

In this terminology, your pre-roll estimation of the probability of double sixes is a weak prior -- the evidence of an actual roll will totally overwhelm it. But your estimation of the correctness of the modern evolutionary theory is a strong prior -- it will take much convincing evidence to persuade you that the theory is not correct after all.

Of course, the posterior of a previous update becomes the prior of the next update.
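
(For concreteness, a minimal sketch of the weak-versus-strong-prior distinction using a Beta-Binomial coin model -- the model and the particular prior strengths are my choices for the example, not anything specified above.)

```python
# Minimal Beta-Binomial illustration of "weak prior" vs "strong prior":
# the same ten observations reshape the weak prior but barely dent the strong one.

def beta_mean(alpha, beta):
    return alpha / (alpha + beta)

heads, tails = 8, 2  # ten observations of a coin, mostly heads

# Weak prior: Beta(1, 1) -- almost no prior commitment about the coin's bias.
print(beta_mean(1, 1))                  # 0.5  before the evidence
print(beta_mean(1 + heads, 1 + tails))  # 0.75 after the evidence

# Strong prior: Beta(500, 500) -- a firm prior belief that the coin is fair.
print(beta_mean(500, 500))                  # 0.5    before the evidence
print(beta_mean(500 + heads, 500 + tails))  # ~0.503 after the same evidence
```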

Using this language, then, you are saying that prima facie evidence of someone's stupidity should be a minor update to the strong prior that she is actually a smart, reasonable, and coherent human being.

And I don't see why this should be so.

Replies from: RobinZ, V_V, TheAncientGeek
comment by RobinZ · 2014-04-23T21:25:55.051Z · LW(p) · GW(p)

Using this language, then, you are saying that prima facie evidence of someone's stupidity should be a minor update to the strong prior that she is actually a smart, reasonable, and coherent human being.

Oh, dear - that's not what I meant at all. I meant that - absent a strong prior - the utterance of a prima facie absurdity should not create a strong prior that the speaker is stupid, unreasonable, or incoherent. It's entirely possible that ten minutes of conversation will suffice to make a strong prior out of this weaker one - there's someone arguing for dualism on a webcomic forum I (in)frequent along the same lines as Chalmers' "hard problem of consciousness", and it took less than ten posts to establish pretty confidently that the same refutations would apply - but as the history of DIPS (defense-independent pitching statistics) shows, it's entirely possible for an idea to be as correct as "the earth is a sphere, not a plane" and nevertheless be taken as prima facie absurd.

(As the metaphor implies, DIPS is not quite correct, but it would be more accurate to describe its successors as "fixing DIPS" than as "showing that DIPS was completely wrongheaded".)

Replies from: Lumifer
comment by Lumifer · 2014-04-24T15:15:01.050Z · LW(p) · GW(p)

I meant that - absent a strong prior - the utterance of a prima facie absurdity should not create a strong prior that the speaker is stupid, unreasonable, or incoherent.

Oh, I agree with that.

What I am saying is that evidence of stupidity should lead you to raise your estimates of the probability that the speaker is stupid. The principle of charity should not prevent that from happening. Of course evidence of stupidity should not make you close the case, declare someone irretrievably stupid, and stop considering any further evidence.

As an aside, I treat how a person argues as a much better indicator of stupidity than what he argues. YMMV, of course.

Replies from: RobinZ
comment by RobinZ · 2014-04-24T15:43:54.134Z · LW(p) · GW(p)

What I am saying is that evidence of stupidity should lead you to raise your estimates of the probability that the speaker is stupid.

...in the context during which they exhibited the behavior which generated said evidence, of course. In broader contexts, or other contexts? To a much lesser extent, and not (usually) strongly in the strong-prior sense, but again, yes. That you should always be capable of considering further evidence is - I am glad to say - so universally accepted a proposition in this forum that I do not bother to enunciate it, but I take no issue with drawing conclusions from a sufficient body of evidence.

Come to think, you might be amused by this fictional dialogue about a mendacious former politician, illustrating the ridiculousness of conflating "never assume that someone is arguing in bad faith" and "never assert that someone is arguing in bad faith". (The author also posted a sequel, if you enjoy the first.)

As an aside, I treat how a person argues as a much better indicator of stupidity than what he argues. YMMV, of course.

I'm afraid that I would have about as much luck barking like a duck as enunciating how I evaluate the intelligence (or reasonableness, or honesty, or...) of those I converse with. YMMV, indeed.

comment by V_V · 2014-04-23T20:47:45.415Z · LW(p) · GW(p)

Using this language, then, you are saying that prima facie evidence of someone's stupidity should be a minor update to the strong prior that she is actually a smart, reasonable, and coherent human being. And I don't see why this should be so.

People tend to update too much in these circumstances: Fundamental attribution error

Replies from: Lumifer
comment by Lumifer · 2014-04-23T21:09:32.185Z · LW(p) · GW(p)

The fundamental attribution error is about underestimating the importance of external drivers (the particular situation, random chance, etc.) and overestimating the importance of internal factors (personality, beliefs, etc.) as an explanation for observed actions.

If a person in a discussion is spewing nonsense, it is rare that external factors are making her do it (other than a variety of mind-altering chemicals). The indicators of stupidity are NOT what position a person argues or how much knowledge about the subject she has -- it's how she does it. And inability e.g. to follow basic logic is hard to attribute to external factors.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2014-04-23T21:33:48.643Z · LW(p) · GW(p)

This discussion has got badly derailed. You are taking it that there is some robust fact about someone's lack of rationality or intelligence which may or may not be explained by internal or external factors.

The point is that you cannot make a reliable judgement about someone's rationality or intelligence unless you have understood what they are saying... and you cannot reliably understand what they are saying unless you treat it as if it were the product of a rational and intelligent person. You can go to "stupid" when all attempts have failed, but not before.

Replies from: Lumifer
comment by Lumifer · 2014-04-24T15:20:42.805Z · LW(p) · GW(p)

you cannot reliably understand what they are saying unless you treat it as if it were the product of a rational and intelligent person.

I disagree, I don't think this is true.

Replies from: None, RobinZ
comment by [deleted] · 2014-04-24T16:35:47.525Z · LW(p) · GW(p)

I think it's true, on roughly these grounds: taking yourself to understand what someone is saying entails thinking that almost all of their beliefs (I mean 'belief' in the broad sense, so as to include my beliefs about the colors of objects in the room) are true. The reason is that unless you assume almost all of a person's (relevant) beliefs are true, the possibility space for judgements about what they mean gets very big, very fast. So if 'generally understanding what someone is telling you' means having a fairly limited possibility space, you only get this on the assumption that the person talking to you has mostly true beliefs. This, of course, doesn't mean they have to be rational in the LW sense, or even very intelligent. The most stupid and irrational (in the LW sense) of us still have mostly true beliefs.

I guess the trick is to imagine what it would be to talk to someone who you thought had on the whole false beliefs. Suppose they said 'pass me the hammer'. What do you think they meant by that? Assuming they have mostly or all false beliefs relevant to the utterance, they don't know what a hammer is or what 'passing' involves. They don't know anything about what's in the room, or who you are, or what you are, or even if they took themselves to be talking to you, or talking at all. The possibility space for what they took themselves to be saying is too large to manage, much larger than, for example, the possibility space including all and only every utterance and thought that's ever been had by anyone. We can say things like 'they may have thought they were talking about cats or black holes or triangles' but even that assumes vastly more truth and reason in the person that we've assumed we can anticipate.

Replies from: Lumifer
comment by Lumifer · 2014-04-24T16:45:07.315Z · LW(p) · GW(p)

Generally speaking, understanding what a person means implies reconstructing their framework of meaning and reference that exists in their mind as the context to what they said.

Reconstructing such a framework does NOT require that you consider it (or the whole person) sane or rational.

Replies from: None
comment by [deleted] · 2014-04-24T16:55:05.945Z · LW(p) · GW(p)

Reconstructing such a framework does NOT require that you consider it (or the whole person) sane or rational.

Well, there are two questions here: 1) is it in principle necessary to assume your interlocutors are sane and rational, and 2) is it as a matter of practical necessity a fact that we always do assume our interlocutors are sane and rational. I'm not sure about the first one, but I am pretty sure about the second: the possibility space for reconstructing the meaning of someone speaking to you is only manageable if you assume that they're broadly sane, rational, and have mostly true beliefs. I'd be interested to know which of these you're arguing about.

Also, we should probably taboo 'sane' and 'rational'. People around here have a tendency to use these words in an exaggerated way to mean that someone has a kind of specific training in probability theory, statistics, biases, etc. Obviously people who have none of these things, like people living thousands of years ago, were sane and rational in the conventional sense of these terms, and they had mostly true beliefs even by any standard we would apply today.

Replies from: Lumifer
comment by Lumifer · 2014-04-24T17:08:42.683Z · LW(p) · GW(p)

The answers to your questions are no and no.

I am pretty sure about the second: the possibility space for reconstructing the meaning of someone speaking to you is only manageable if you assume that they're broadly sane, rational, and have mostly true beliefs.

I don't think so. Two counter-examples:

I can discuss fine points of theology with someone without believing in God. For example, I can understand the meaning of the phrase "Jesus' self-sacrifice washes away the original sin" without accepting that Christianity is "mostly true" or "rational".

Consider a psychotherapist talking to a patient, let's say a delusional one. Understanding the delusion does not require the psychotherapist to believe that the patient is sane.

Replies from: None, TheAncientGeek
comment by [deleted] · 2014-04-24T18:52:57.280Z · LW(p) · GW(p)

I can discuss fine points of theology with someone without believing in God. For example, I can understand the meaning of the phrase "Jesus' self-sacrifice washes away the original sin" without accepting that Christianity is "mostly true" or "rational".

You're not being imaginative enough: you're thinking about someone with almost all true beliefs (including true beliefs about what Christians tend to say), and a couple of sort of stand-out false beliefs about how the universe works as a whole. I want you to imagine talking to someone with mostly false beliefs about the subject at hand. So you can't assume that by 'Jesus' self-sacrifice washes away the original sin' they're talking about anything you know anything about, because you can't assume they are connecting with any theology you've ever heard of. Or even that they're talking about theology. Or even objects or events in any sense you're familiar with.

Consider a psychotherapist talking to a patient, let's say a delusional one. Understanding the delusion does not require the psychotherapist to believe that the patient is sane.

I think, again, delusional people are remarkable for having some unaccountably false beliefs, not for having mostly false beliefs. People with mostly false beliefs, I think, wouldn't be recognizable even as being conscious or aware of their surroundings (because they're not!).

Replies from: Lumifer
comment by Lumifer · 2014-04-25T00:41:10.648Z · LW(p) · GW(p)

People with mostly false beliefs, I think, wouldn't be recognizable even as being conscious or aware of their surroundings (because they're not!).

So why are we talking about them, then?

Replies from: None
comment by [deleted] · 2014-04-25T02:00:14.527Z · LW(p) · GW(p)

Well, my point is that as a matter of course, you assume everyone you talk to has mostly true beliefs, and for the most part thinks rationally. We're talking about 'people' with mostly or all false beliefs just to show that we don't have any experience with such creatures.

Bigger picture: the principle of charity, that is the assumption that whoever you are talking to is mostly right and mostly rational, isn't something you ought to hold, it's something you have no choice but to hold. The principle of charity is a precondition on understanding anyone at all, even recognizing that they have a mind.

Replies from: Jiro
comment by Jiro · 2014-04-25T05:52:43.497Z · LW(p) · GW(p)

People will have mostly true beliefs, but they might not have true beliefs in the areas under concern. For obvious reasons, people's irrationality is likely to be disproportionately present in the beliefs about which they disagree with others. So the fact that you need to be charitable in assuming people have mostly true beliefs may not be practically useful--I'm sure a creationist rationally thinks water is wet, but if I'm arguing with him, that subject probably won't come up as much as creationism.

Replies from: None
comment by [deleted] · 2014-04-25T15:02:38.915Z · LW(p) · GW(p)

That's true, but I feel like a classic LW point can be made here: suppose it turns out some people can do magic. That might seem like a big change, but in fact magic will then just be subject to the same empirical investigation as everything else, and ultimately the same integration into physical theory as everything else.

So while I agree with you that when we specify a topic, we can have broader disagreement, that disagreement is built on and made possible by very general agreement about everything else. Beliefs are holistic, not atomic, and we can't partition them off while making any sense of them. We're never just talking about some specific subject matter, but rather emphasizing some subject matter on the background of all our other beliefs (most of which must be true).

The thought, in short, is that beliefs are of a nature to be true, in the way dogs naturally have four legs. Some don't, because something went wrong, but we can only understand the defect in these by having the basic nature of beliefs, namely truth, in the background.

Replies from: ChristianKl
comment by ChristianKl · 2014-04-25T15:51:00.121Z · LW(p) · GW(p)

That might seem like a big change, but in fact magic will then just be subject to the same empirical investigation as everything else, and ultimately the same integration into physical theory as everything else.

That could be true but doesn't have to be true. Our ontological assumptions might also turn out to be mistaken.

To quote Eliezer:

MORPHEUS: Where did you hear about the laws of thermodynamics, Neo?

NEO: Anyone who's made it past one science class in high school ought to know about the laws of thermodynamics!

MORPHEUS: Where did you go to high school, Neo?

(Pause.)

NEO: ...in the Matrix.

MORPHEUS: The machines tell elegant lies.

(Pause.)

NEO (in a small voice): Could I please have a real physics textbook?

MORPHEUS: There is no such thing, Neo. The universe doesn't run on math.

Replies from: None
comment by [deleted] · 2014-04-25T16:26:49.830Z · LW(p) · GW(p)

That could be true but doesn't have to be true. Our ontological assumptions might also turn out to be mistaken.

True, and a discovery like that might require us to make some pretty fundamental changes. But I don't think Morpheus could be right about the universe's relation to math. No universe, I take it, 'runs' on math in anything but the loosest figurative sense. The universe we live in is subject to mathematical analysis, and what reason could we have for thinking any universe could fail to be so? I can't say for certain, of course, that every possible universe must run on math, but I feel safe in claiming that we've never imagined a universe, in fiction or through something like religion, which would fail to run on math.

More broadly speaking, anything that is going to be knowable at all is going to be rational and subject to rational understanding. Even if someone has some very false beliefs, their beliefs are false, not just jibber-jabber (and if they are just jibber-jabber then you're not talking to a person). Even false beliefs are going to have a rational structure.

Replies from: Eugine_Nier, ChristianKl, TheAncientGeek, Lumifer
comment by Eugine_Nier · 2014-05-03T04:04:50.937Z · LW(p) · GW(p)

I can't say for certain, of course, that every possible universe must run on math, but I feel safe in claiming that we've never imagined a universe, in fiction or through something like religion, which would fail to run on math.

That is a fact about you, not a fact about the universe. Nobody could imagine light being both a particle and a wave, for example, until their study of nature forced them to.

Replies from: Jiro, None
comment by Jiro · 2014-05-03T15:10:40.828Z · LW(p) · GW(p)

People could imagine such a thing before studying nature showed they needed to; they just didn't. I think there's a difference between a concept that people only don't imagine, and a concept that people can't imagine. The latter may mean that the concept is incoherent or has an intrinsic flaw, which the former doesn't.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2014-05-03T19:55:07.967Z · LW(p) · GW(p)

People could imagine such a thing before studying nature showed they needed to; they just didn't. I think there's a difference between a concept that people only don't imagine, and a concept that people can't imagine.

In the interest of not having this discussion degenerate into an argument about what "could" means, I would like to point out that your and hen's only evidence that you couldn't imagine a world that doesn't run on math is that you haven't.

Replies from: private_messaging
comment by private_messaging · 2014-05-03T20:17:17.333Z · LW(p) · GW(p)

For one thing, "math" trivially happens to run on world, and corresponds to what happens when you have a chain of interactions. Specifically to how one chain of physical interactions (apples being eaten for example) combined with another that looks dissimilar (a binary adder) ends up with conclusion that apples were counted correctly, or how the difference in count between the two processes of counting (none) corresponds to another dissimilar process (the reasoning behind binary arithmetic).

As long as there's any correspondences at all between different physical processes, you'll be able to kind of imagine that world runs on world arranged differently, and so it would appear that world "runs on math".

If we were to discover some new laws of physics that were producing incalculable outcomes, we would just utilize those laws in some sort of computer and co-opt them as part of "math", substituting processes for equivalent processes. That's how we came up with math in the first place.

edit: to summarize, I think "the world runs on math" is a really confused way to look at how the world relates to the practice of mathematics inside of it. I can perfectly well say that the world doesn't run on math any more than radio waves are transmitted by a mechanical aether made of gears, springs, and weights, and have exactly the same expectations about everything.
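
A minimal sketch of the correspondence being described, assuming we model "counting apples" as tallying a list and build a toy ripple-carry adder out of bit operations (all names and numbers here are illustrative, not anything from the thread):

    def tally(objects):
        # "Physical" counting: go through the objects one at a time.
        n = 0
        for _ in objects:
            n += 1
        return n

    def to_bits(n):
        # Little-endian binary encoding of a non-negative integer.
        bits = []
        while n:
            bits.append(n % 2)
            n //= 2
        return bits or [0]

    def from_bits(bits):
        return sum(bit << i for i, bit in enumerate(bits))

    def binary_add(a_bits, b_bits):
        # A toy ripple-carry adder over little-endian bit lists.
        result, carry = [], 0
        for i in range(max(len(a_bits), len(b_bits))):
            a = a_bits[i] if i < len(a_bits) else 0
            b = b_bits[i] if i < len(b_bits) else 0
            total = a + b + carry
            result.append(total % 2)
            carry = total // 2
        if carry:
            result.append(carry)
        return result

    # Two dissimilar-looking processes agree: tallying two heaps of apples together
    # matches running the adder on the binary encodings of the separate tallies.
    heap_a, heap_b = ["apple"] * 3, ["apple"] * 5
    assert tally(heap_a + heap_b) == from_bits(binary_add(to_bits(tally(heap_a)), to_bits(tally(heap_b))))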

Replies from: TheAncientGeek
comment by TheAncientGeek · 2014-05-04T13:56:09.731Z · LW(p) · GW(p)

"There is non trivial subset of maths whish describes physical law" might be better way of stating it

Replies from: private_messaging
comment by private_messaging · 2014-05-05T03:50:09.175Z · LW(p) · GW(p)

It seems to me that as long as there's anything that is describable in the loosest sense, that would be taken to be true.

I mean, look at this: some people believe literally that our universe is a "mathematical object", whatever that means (tegmarkery), and we haven't even got a candidate TOE that works.

edit: I think the issue is that Morpheus confuses "made of gears" with "predictable by gears". Time is not made of gears, and neither are astronomical objects, but a clock is very useful nonetheless.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2014-05-05T21:50:56.702Z · LW(p) · GW(p)

I don't see why "describable" would necessarily imply "describable mathematically". I can imagine a qualia only universe,and I can imagine the ability describe qualia. As things stand, there are a number of things that can't be described mathematically

Replies from: dthunt
comment by dthunt · 2014-05-05T22:00:43.531Z · LW(p) · GW(p)

Example?

Replies from: TheAncientGeek
comment by TheAncientGeek · 2014-05-05T23:38:45.387Z · LW(p) · GW(p)

Qualia, the passage of time, symbol grounding...

comment by [deleted] · 2014-05-03T14:17:45.997Z · LW(p) · GW(p)

Absolutely, it's a fact about me, that's my point. I just also think it's a necessary fact.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2014-05-03T19:57:13.939Z · LW(p) · GW(p)

I just also think it's a necessary fact.

What's your evidence for this? Keep in mind that the history of science is full of people asserting that X has to be the case because they couldn't imagine the world being otherwise, only for subsequent discoveries to show that X is not in fact the case.

Replies from: army1987, Jiro, None
comment by A1987dM (army1987) · 2014-05-04T19:00:59.700Z · LW(p) · GW(p)

Name three (as people often say around here).

Replies from: Eugine_Nier
comment by Eugine_Nier · 2014-05-04T22:52:58.707Z · LW(p) · GW(p)

Well, the most famous (or infamous) is Kant's argument that space must be flat (in the Euclidean sense) because the human mind is incapable of imagining it to be otherwise.

Another example was Lucretius's argument against the theory that the earth is round: if the earth were round and things fell towards its center, then in which direction would an object at the center fall?

Not to mention the standard argument against the universe having a beginning "what happened before it?"

Replies from: None
comment by [deleted] · 2014-05-04T23:36:08.975Z · LW(p) · GW(p)

I don't intend to bicker; I think your point is a good one independently of these examples. In any case, I don't think at least the first two of these are examples of the phenomenon you're talking about.

Well, the most famous (or infamous) is Kant's argument that space must be flat (in the Euclidean sense) because the human mind is incapable of imagining it to be otherwise.

I think this comes up in the sequences as an example of the mind-projection fallacy, but that's not right. Kant did not take himself to be saying anything about the world outside the mind when he said that space was flat. He only took himself to be talking about the world as it appears to us. Space, so far as Kant was concerned, was part of the structure of perception, not the universe. So in the Critique of Pure Reason, he says:

...if we remove our own subject or even only the subjective constitution of the senses in general, then all constitution, all relations of objects in space and time, indeed space and time themselves would disappear, and as appearances they cannot exist in themselves, but only in us. What may be the case with objects in themselves and abstracted from all this receptivity of our sensibility remains entirely unknown to us. (A42/B59–60)

So Kant is pretty explicit that he's not making a claim about the world, but about the way we perceive it. Kant would very likely poke you in the chest and say "No, you're committing the mind-projection fallacy for thinking that space is even in the world, rather than just a form of perception. And don't tell me about the mind-projection fallacy anyway, I invented that whole move."

Another example was Lucretius's argument against the theory that the earth is round: if the earth were round and things fell towards its center than in which direction would an object at the center fall?

This also isn't an example, because the idea of a spherical world had in fact been imagined in detail by Plato (with whom Lucretius seems to be arguing), Aristotle, and many of Lucretius' contemporaries and predecessors. Lucretius' point couldn't have been that a round earth is unimaginable, but that it was inconsistent with an analysis of the motions of simple bodies in terms of up and down: you can't say that fire is of a nature to go up if up is entirely relative. Or I suppose, you can say that but you'd have to come up with a more complicated account of natures.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2014-05-12T00:41:52.917Z · LW(p) · GW(p)

Kant did not take himself to be saying anything about the world outside the mind when he said that space was flat. He only took himself to be talking about the world as it appears to us. Space, so far as Kant was concerned, was part of the structure of perception, not the universe.

And in particular he claimed that this showed it had to be Euclidean because humans couldn't imagine it otherwise. Well, we now know it's not Euclidean and people can imagine it that way (I suppose you could dispute this, but that gets into exactly what we mean by "imagine" and attempting to argue about other people's qualia).

Replies from: None
comment by [deleted] · 2014-05-12T02:31:11.984Z · LW(p) · GW(p)

And in particular he claimed that this showed it had to be Euclidean because humans couldn't imagine it otherwise.

No, he never says that. Feel free to cite something from Kant's writing, or the SEP or something. I may be wrong, but I just read though the Aesthetic again, and I couldn't find anything that would support your claim.

EDIT: I did find one passage that mentions imagination:

  1. Space then is a necessary representation a priori, which serves for the foundation of all external intuitions. We never can imagine or make representation to ourselves of the non-existence of space, though we may easily enough think that no objects are found in it.

I've edited my post accordingly, but my point remains the same. Notice that Kant does not mention the flatness of space, nor is it at all obvious that he's inferring anything from our inability to imagine the non-existence of space. END EDIT.

You gave Kant's views about space as an example of someone saying 'because we can't imagine it otherwise, the world must be such and such'. Kant never says this. What he says is that the principles of geometry are not derived simply from the analysis of terms, nor are they empirical. Kant is very, very explicit (almost annoyingly repetitive) that he is not talking about the world, but about our perceptive faculties. And if indeed we cannot imagine x, that does seem to me to be a good basis from which to draw some conclusions about our perceptive faculties.

I have no idea what Kant would say about whether or not we can imagine non-Euclidian space (I have no idea myself if we can) but the matter is complicated because 'imagination' is a technical term in his philosophy. He thought space was an infinite Euclidian magnitude, but Euclidian geometry was the only game in town at the time.

Anyway he's not a good example. As I said before, I don't mean to dispute the point the example was meant to illustrate. I just wanted to point out that this is an incorrect view of Kant's claims about space. It's not really very important what he thought about space though.

comment by Jiro · 2014-05-04T02:24:29.617Z · LW(p) · GW(p)

There's a difference between "can't imagine" in a colloquial sense, and actual inability to imagine. There's also a difference between not being able to think of how something fits into our knowledge about the universe (for instance, not being able to come up with a mechanism or not being able to see how the evidence supports it) and not being able to imagine the thing itself.

There also aren't as many examples of this in the history of science as you probably think. Most of the examples that come to people's minds involve scientists versus nonscientists.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2014-05-04T22:55:25.206Z · LW(p) · GW(p)

There also aren't as many examples of this in the history of science as you probably think.

See my reply to army above.

comment by [deleted] · 2014-05-03T20:29:39.746Z · LW(p) · GW(p)

Hold on now, you're pattern matching me. I said:

I can't say for certain, of course, that every possible universe must run on math, but I feel safe in claiming that we've never imagined a universe, in fiction or through something like religion, which would fail to run on math.

To which you replied that this is a fact about me, not the universe. But I explicitly say that it's not a fact about the universe! My evidence for this is the only evidence that could be relevant: my experience with literature, science fiction, talking to people, etc.

Nor is it relevant that science is full of people that say that something has to be true because they can't imagine the world otherwise. Again, I'm not making a claim about the world, I'm making a claim about the way we have imagined, or now imagine the world to be. I would be very happy to be pointed toward a hypothetical universe that isn't subject to mathematical analysis and which contains thinking animals.

So before we go on, please tell me what you think I'm claiming? I don't wish to defend any opinions but my own.

Replies from: TheAncientGeek, Eugine_Nier
comment by TheAncientGeek · 2014-05-04T14:50:48.536Z · LW(p) · GW(p)

Hen, I told you how I imagine such a universe, and you told me I couldn't be imagining it! Maybe you could undertake not to gainsay further hypotheses.

Replies from: None
comment by [deleted] · 2014-05-04T19:51:36.703Z · LW(p) · GW(p)

I found your suggestion to be implausible for two reasons: first, I don't think the idea of epistemically significant qualia is defensible, and second, even on the condition that it is, I don't think the idea of a universe of nothing but a single quale (one having epistemic significance) is defensible. Both of these points would take some time to work out, and it struck me in our last exchange that you had neither the patience nor the good will to do so, at least not with me. But I'd be happy to discuss the matter if you're interested in hearing what I have to say.

comment by Eugine_Nier · 2014-05-04T01:44:30.005Z · LW(p) · GW(p)

So before we go on, please tell me what you think I'm claiming?

You said:

I just also think it's a necessary fact.

I'm not sure what you mean by "necessary", but the most obvious interpretation is that you think it's necessarily impossible for the world to not be run by math or at least for humans to understand a world that doesn't.

Replies from: None
comment by [deleted] · 2014-05-04T15:45:03.625Z · LW(p) · GW(p)

it's [probably] impossible for humans to understand a world that [isn't subject to mathematical analysis].

This is my claim, and here's the thought: thinking things are natural, physical objects and they necessarily have some internal complexity. Further, thoughts have some basic complexity: I can't engage in an inference with a single term.

Any universe which would not in principle be subject to mathematical analysis is a universe in which there is no quantity of anything. So it can't, for example, involve any space or time, no energy or mass, no plurality of bodies, no forces, nothing like that. It admits of no analysis in terms of propositional logic, so Bayes is right out, as is any understanding of causality. This, it seems to me, would preclude the possibility of thought altogether. It may be that the world we live in is actually like that, and all its multiplicity is merely the contribution of our minds, so I won't venture a claim about the world as such. So far as I know, the fact that worlds admit of mathematical analysis is a fact about thinking things, not worlds.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2014-05-04T23:06:21.048Z · LW(p) · GW(p)

thinking things are natural, physical objects and they necessarily have some internal complexity. Further, thoughts have some basic complexity: I can't engage in an inference with a single term.

What do you mean by "complexity"? I realize you have an intuitive idea, but it could very well be that your idea doesn't make sense when applied to whatever the real universe is.

Any universe which would not in principle be subject to mathematical analysis is a universe in which there is no quantity of anything.

Um, that seems like a stretch. Just because some aspects of the universe are subject to mathematical analysis doesn't necessarily mean the whole universe is.

Replies from: None
comment by [deleted] · 2014-05-05T00:45:13.062Z · LW(p) · GW(p)

What do you mean by "complexity"? I realize you have an intuitive idea, but it could very well be that your idea doesn't make sense when applied to whatever the real universe is.

For my purposes, complexity is: involving (in the broadest sense of that word) more than one (in the broadest sense of that word) thing (in the broadest sense of that word). And remember, I'm not talking about the real universe, but about the universe as it appears to creatures capable of thinking.

Um, that seems like a stretch. Just because some aspects of the universe are subject to mathematical analysis doesn't necessarily mean the whole universe is.

I think it does, if you're granting me that such a world could be distinguished into parts. It doesn't mean we could have the rich mathematical understanding of laws we do now, but that's a higher bar than I'm talking about.

Replies from: ChristianKl
comment by ChristianKl · 2014-05-05T02:02:39.207Z · LW(p) · GW(p)

You can always "use" analysis the issue is whether it gives you correct answers. It only gives you the correct answer if the universe obeys certain axioms.

Replies from: None
comment by [deleted] · 2014-05-05T03:39:10.722Z · LW(p) · GW(p)

Well, this gets us back to the topic that spawned this whole discussion: I'm not sure we can separate the question 'can we use it' from 'does it give us true results' with something like math. If I'm right that people always have mostly true beliefs, then when we're talking about the more basic ways of thinking (not Aristotelian dynamics, but counting, arithmetic, etc.) the fact that we can use them is very good evidence that they mostly return true results. So if you're right that you can always use, say, arithmetic, then I think we should conclude that a universe is always subject to analysis by arithmetic.

You may be totally wrong that you can always use these things, of course. But I think you're probably right and I can't make sense of any suggestion to the contrary that I've heard yet.

Replies from: private_messaging
comment by private_messaging · 2014-05-05T04:02:41.801Z · LW(p) · GW(p)

One could mathematically describe things not analysable by arithmetic, though...

Replies from: None
comment by [deleted] · 2014-05-05T04:07:23.088Z · LW(p) · GW(p)

Fair point, arithmetic's not a good example of a minimum for mathematical description.

comment by ChristianKl · 2014-04-27T16:57:17.227Z · LW(p) · GW(p)

More broadly speaking, anything that is going to be knowable at all is going to be rational and subject to rational understanding.

The idea of rational understanding rests on the fact that you are separated from the object that you are trying to understand and the object itself doesn't change if you change your understanding of it.

Then there's the halting problem. There are a variety of problems that are NP. Those problems can't be understood by doing a few experiments and then extrapolating general rules from your experiments. I'm not quite firm on the mathematical terminology, but I think NP problems are not subject to things like calculus that are covered in what Wikipedia describes as Mathematical analysis.

Heinz von Förster makes the point that children have to be taught that "green" is no valid answer to the question: "What's 2+2?". I personally like his German book titled "Truth is the invention of a liar". Heinz von Förster founded the Biological Computer Laboratory in 1958 and came up with concepts like second-order cybernetics.

As far as fictional worlds go, Terry Pratchett's Discworld runs on narrativium instead of math.

More broadly speaking, anything that is going to be knowable at all is going to be rational and subject to rational understanding.

That's true as long as there are no revelations of truth by Gods or other magical processes. In a universe where you can get the truth through magical tarot reading, that assumption is false.

Replies from: None
comment by [deleted] · 2014-04-27T21:21:04.360Z · LW(p) · GW(p)

The idea of rational understanding rests on the fact that you are separated from the object that you are trying to understand and the object itself doesn't change if you change your understanding of it.

That's not obvious to me. Why do you think this?

That's true as long as there are no revelations of truth by Gods or other magical processes. In a universe where you can get the truth through magical tarot reading, that assumption is false.

I also don't understand this inference. Why do you think revelations of truth by Gods or other magical processes, or tarot readings, mean that such a universe would a) be knowable, and b) not be subject to rational analysis?

Replies from: ChristianKl
comment by ChristianKl · 2014-04-28T00:27:52.702Z · LW(p) · GW(p)

It might depend a bit on what you mean by rationality. You lose objectivity.

Let's say I hypnotize someone. I'm in a deep state of rapport. That means my emotional state matters a great deal. If I label something that the person I'm talking to does as unsuccessful, anxiety rises in me. That anxiety will screw with the result I want to achieve. I'm better off if I blank my mind instead of engaging in rational analysis of what I'm doing.

I also don't understand this inference. Why do you think revelations of truth by Gods or other magical processes, or tarot readings, mean that such a universe would a) be knowable, and b) not be subject to rational analysis?

Logically A -> B is not the same thing as B -> A.

I said that it's possible for there to be knowledge that you can only get through a process besides rational analysis if you allow "magic".

Replies from: None
comment by [deleted] · 2014-04-28T03:13:58.315Z · LW(p) · GW(p)

If I label something that the person I'm talking to does as unsuccessful, anxiety rises in me. That anxiety will screw with the result I want to achieve.

I'm a little lost. So do you think these observations challenge the idea that in order to understand anyone, we need to assume they've got mostly true beliefs, and make mostly rational inferences?

comment by TheAncientGeek · 2014-04-25T17:05:58.748Z · LW(p) · GW(p)

I don't know what you mean by "run on math". Do qualia run on math?

Replies from: None
comment by [deleted] · 2014-04-25T18:26:03.214Z · LW(p) · GW(p)

It's not my phrase, and I don't particularly like it myself. If you're asking whether or not qualia are quanta, then I guess the answer is no, but in the sense that the measured is not the measure. It's a triviality that I can ask you how much pain you feel on a scale of 1-10, and get back a useful answer. I can't get at what the experience of pain itself is with a number or whatever, but then, I can't get at what the reality of a block of wood is with a ruler either.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2014-04-26T11:15:50.096Z · LW(p) · GW(p)

Then by imagining an all qualia universe, I can easily imagine a universe that doesn't run on math, for some values of "runs on math".

Replies from: None
comment by [deleted] · 2014-04-26T13:53:29.644Z · LW(p) · GW(p)

I don't think you can imagine, or conceive of, an all qualia universe though.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2014-04-26T14:21:04.444Z · LW(p) · GW(p)

You don't get to tell me what I can imagine, though. All I have to do is imagine away the quantitative and structural aspects of my experience.

Replies from: None
comment by [deleted] · 2014-04-27T16:16:29.423Z · LW(p) · GW(p)

I rather think I do. If you told me you could imagine a euclidian triangle with more or less than 180 internal degrees, I would rightly say 'No you can't'. It's simply not true that we can imagine or conceive of anything we can put into (or appear to put into) words. And I don't think it's possible to imagine away things like space and time and keep hold of the idea that you're imagining a universe, or an experience, or anything like that. Time especially, and so long as I have time, I have quantity.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2014-04-27T16:31:51.166Z · LW(p) · GW(p)

That looks like the typical mind fallacy.

I don't know where you are getting your facts from, but it is well known that people's abilities at visualization vary considerably, so where's the "we"?

Having studied non-Euclidean geometry, I can easily imagine a triangle whose angles sum to more than 180 (hint: it's inscribed on the surface of a sphere; a formula is sketched below).

Saying that non-spatial or non-temporal universes aren't really universes is a No True Scotsman fallacy.

Non-spatial and non-temporal models have been seriously proposed by physicists; perhaps you should talk to them.
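
A concrete version of the hint above (a standard result of spherical geometry, included only for illustration): for a triangle on a sphere of radius R enclosing area A, the angle sum is

    \alpha + \beta + \gamma = \pi + \frac{A}{R^2}

so, for example, the triangle bounded by the equator and two meridians 90 degrees apart has three right angles, for a sum of 270 degrees.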

Replies from: Jiro
comment by Jiro · 2014-04-27T16:41:16.982Z · LW(p) · GW(p)

It depends on what you mean by "imagine". I can't imagine a Euclidian triangle with less than 180 degrees in the sense of having a visual representation in my mind that I could then reproduce on a piece of paper. On the other hand, I can certainly imagine someone holding up a measuring device to a vague figure on a piece of paper and saying "hey, I don't get 180 degrees when I measure this".

Of course, you could say that the second one doesn't count since you're not "really" imagining a triangle unless you imagine a visual representation, but if you're going to say that you need to remember that all nontrivial attempts to imagine things don't include as much detail as the real thing. How are you going to define it so that eliminating some details is okay and eliminating other details isn't?

(And if you try that, then explain why you can't imagine a triangle whose angles add up to 180.05 degrees or some other amount that is not 180 but is close enough that you wouldn't be able to tell the difference in a mental image. And then ask yourself "can I imagine someone writing a proof that a Euclidian triangle's angles don't add up to 180 degrees?" without denying that you can imagine people writing proofs at all.)

Replies from: None
comment by [deleted] · 2014-04-27T21:10:03.961Z · LW(p) · GW(p)

These are good questions, and I think my general answer is this: in the context of this and similar arguments, being able to imagine something is sometimes taken as evidence that it's at least a logical possibility. I'm fine with that, but it needs to be imagined in enough detail to capture the logical structure of the relevant possibility. If someone is going to argue, for example, that one can imagine a Euclidean triangle with more or less than 180 internal degrees, the imagined state of affairs must have at least as much logical detail as does a Euclidean triangle with 180 internal degrees. Will that exclude your 'vague shape' example, and probably your 'proof' example?

Replies from: Jiro, TheAncientGeek
comment by Jiro · 2014-04-28T02:26:04.623Z · LW(p) · GW(p)

Will that exclude your 'vague shape' example, and probably your 'proof' example?

It would exclude the vague shape example but I think it fails for the proof example.

Your reasoning suggests that if X is false, it would be impossible for me to imagine someone proving X. I think that is contrary to what most people mean when they say they can imagine something.

It's not clear what your reasoning implies when X is true. Either

  1. I cannot imagine someone proving X unless I can imagine all the steps in the proof
  2. I can imagine someone proving X as long as X is true, since having a proof would be a logical possibility as long as X is true

1) is also contrary to what most people think of as imagining. 2) would mean that it is possible for me to not know whether or not I am imagining something. (I imagine someone proving X and I don't know if X is true. 2) means that if X is true I'm "really imagining" it and that if X is false, I am not.)

Replies from: None
comment by [deleted] · 2014-04-28T03:18:45.395Z · LW(p) · GW(p)

Your reasoning suggests that if X is false, it would be impossible for me to imagine someone proving X. I think that is contrary to what most people mean when they say they can imagine something.

Well, say I argue that it's impossible to write a story about a bat. It seems like it should be unconvincing for you to say 'But I can imagine someone writing a story about a bat...see, I'm imagining Tom, who's just written a story about a bat.' Instead, you'd need to imagine the story itself. I don't intend to talk about the nature of the imagination here, only to say that as a rule, showing that something is logically possible by way of imagining it requires that it have enough logical granularity to answer the challenge.

So I don't doubt that you could imagine someone proving that E-triangles have more than 180 internal degrees, but I am saying that not all imaginings are contenders in an argument about logical possibility. Only those ones which have sufficient logical granularity do.

Replies from: Jiro
comment by Jiro · 2014-04-28T05:39:54.181Z · LW(p) · GW(p)

I would understand "I can imagine..." in such a context to mean that it doesn't contain flaws that are basic enough to prevent me from coming up with a mental picture or short description. Not that it doesn't contain any flaws at all. It wouldn't make sense to have "I can imagine X" mean "there are no flaws in X"--that would make "I can imagine X" equivalent to just asserting X.

Replies from: None
comment by [deleted] · 2014-04-28T14:01:21.443Z · LW(p) · GW(p)

The issue isn't flaws or flawlessness. In my bat example, you could perfectly imagine Tom sitting in an easy chair with a glass of scotch saying to himself, 'I'm glad I wrote that story about the bat'. But that wouldn't help. I never said it's impossible for Tom to sit in a chair and say that, I said that it was impossible to write a story about a bat.

The issue isn't logical detail simpliciter, but logical detail relative to the purported impossibility. In the triangle case, you have to imagine, not Tom sitting in his chair thinking 'I'm glad I proved that E-triangles have more than 180 internal degrees' (no one could deny that that is possible) but rather the figure itself. It can be otherwise as vague and flawed as you like, so long as the relevant bits are there. Very likely, imagining the proof in the relevant way would require producing it.

And you are asserting something: you're asserting the possibility of something in virtue of the fact that it is in some sense actual. To say that something is logically impossible is to say that it can't exist anywhere, ever, not even in a fantasy. To imagine up that possibility is to make it sufficiently real to refute the claim of impossibility, but only if you imagine, and thus make real, the precise thing being claimed to be impossible.

comment by TheAncientGeek · 2014-04-27T21:36:47.161Z · LW(p) · GW(p)

Are you sure it is logically impossible to have spaceless and timeless universes? Who has put forward the necessity of space and time?

Replies from: None
comment by [deleted] · 2014-04-27T22:39:33.047Z · LW(p) · GW(p)

Are you sure it is logically impossible to have spaceless and timeless universes?

Dear me no! I have no idea if such a universe is impossible. I'm not even terribly confident that this universe has space or time.

I am pretty sure that space and time (or something like them) are a necessary condition on experience, however. Maybe they're just in our heads, but it's nevertheless necessary that they, or something like them, be in our heads. Maybe some other kind of creature thinks in terms of space, time, and fleegle, or just fleegle, time, and blop, or just blop and nizz. But I'm confident that such things will all have some common features, namely being something like a context for a multiplicity. I mean in the way time is a context for seeing this, followed by that, and space is a context for seeing this in that in some relation, etc.

Without something like this, it seems to me experience would always (except there's no time) only be of one (except an idea of number would never come up) thing, in which case it wouldn't be rich enough to be an experience. Or experience would be of nothing, but that's the same problem.

So there might be universes of nothing but qualia (or, really, quale) but it wouldn't be a universe in which there are any experiencing or thinking things. And if that's so, the whole business is a bit incoherent, since we need an experiencer to have a quale.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2014-04-27T22:48:49.778Z · LW(p) · GW(p)

Are you using experience to mean visual experience by any chance? How much spatial information are you getting from hearing?

PS your dogmatic Kantianism is now taken as read.

Replies from: None
comment by [deleted] · 2014-04-27T22:55:24.749Z · LW(p) · GW(p)

Tapping out.

comment by Lumifer · 2014-04-25T16:57:56.939Z · LW(p) · GW(p)

we've never imagined a universe, in fiction or through something like religion, which would fail to run on math.

That depends on your definition of "math".

For example, consider a simulated world where you control the code. Can you make it so that 2+2 in that simulation is sometimes 4, sometimes 15, and sometimes green? I don't see why not.

Replies from: RobinZ, None, TheAncientGeek
comment by RobinZ · 2014-04-25T18:53:35.136Z · LW(p) · GW(p)

For example, consider a simulated world where you control the code. Can you make it so that 2+2 in that simulation is sometimes 4, sometimes 15, and sometimes green? I don't see why not.

I think you're conflating the physical operation that we correlate with addition and the mathematical structure. 'Green' I'm not seeing, but I could write a computer program modeling a universe in which placing a pair of stones in a container that previously held a pair of stones does not always lead to that container holding a quadruplet of stones. In such a universe, the mathematical structure we call 'addition' would not be useful, but that doesn't say that the formalized reasoning structure we call 'math' would not exist, or could not be employed.

(In fact, if it's a computer program, it is obvious that its nature is susceptible to mathematical analysis.)
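
A minimal sketch of the kind of program described here, assuming a toy "universe" whose merging rule sometimes creates or destroys stones (the function names and the perturbation rule are invented for illustration):

    import random

    def combine_stones(count_a, count_b, rng):
        # The "physics" of this toy universe: merging two containers of stones
        # perturbs the total, so stone-merging is not modeled by addition.
        perturbation = rng.choice([-1, 0, 0, 2])
        return max(0, count_a + count_b + perturbation)

    # A pair of stones plus a pair of stones is not always a quadruplet here...
    outcomes = {combine_stones(2, 2, random.Random(seed)) for seed in range(100)}
    print(outcomes)  # typically something like {3, 4, 6}
    # ...yet the rule generating these outcomes is itself perfectly mathematical.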

comment by [deleted] · 2014-04-25T18:29:57.015Z · LW(p) · GW(p)

For example, consider a simulated world where you control the code. Can you make it so that 2+2 in that simulation is sometimes 4, sometimes 15, and sometimes green?

I guess I could make it appear that way, sure, though I don't know if I could then recognize anything in my simulation as thinking or doing math. But in any case, that's not a universe in which 2+2=green, it's a universe in which it appears to. Maybe I'm just not being imaginative enough, and so you may need to help me flesh out the hypothetical.

Replies from: ChristianKl, shminux
comment by ChristianKl · 2014-05-05T12:30:04.280Z · LW(p) · GW(p)

If I write the simulation in Python I can simply define my own function for addition:

    from random import choice

    def addition(num1, num2):
        if num1 == 2 and num2 == 2:
            return choice([4, 15, "green"])
        else:
            return num1 + num2  # (or something more creative)

Replies from: None
comment by [deleted] · 2014-05-05T14:45:25.636Z · LW(p) · GW(p)

We don't need to go to the trouble of defining anything in Python. We can get the same result just by saying

2+2=green!

Replies from: ChristianKl
comment by ChristianKl · 2014-05-05T15:30:17.627Z · LW(p) · GW(p)

If I use Python to simulate a world then it matters how things are defined in Python.

It doesn't only appear that 2+2=green; it's that way at the level of the source code on which the world runs.

Replies from: None
comment by [deleted] · 2014-05-05T16:53:00.762Z · LW(p) · GW(p)

But it sounds to me like you're talking about the manipulation of signs, not about numbers themselves. We could make the set of signs '2+2=' end any way we like, but that doesn't mean we're talking about numbers. I donno, I think you're being too cryptic or technical or something for me, I don't really understand the point you're trying to make.

Replies from: ChristianKl
comment by ChristianKl · 2014-05-05T21:06:21.217Z · LW(p) · GW(p)

What do you mean with "the numbers themselves"? Peano axioms? I could imagine that n -> n+1 just doesn't apply.

comment by shminux · 2014-04-25T18:51:34.396Z · LW(p) · GW(p)

Math is what happens when you take your original working predictive toolkit (like counting sheep) and let it run on human wetware disconnected from its original goal of having to predict observables. Thus some form of math would arise in any somewhat-predictable universe evolving a calculational substrate.

Replies from: None
comment by [deleted] · 2014-04-26T02:10:04.350Z · LW(p) · GW(p)

Math is what happens when you take your original working predictive toolkit (like counting sheep) and let it run on human wetware disconnected from its original goal of having to predict observables.

That's an interesting problem. Do we have math because we make abstractions about the multitude of things around us, or must we already have some idea of math in the abstract just to recognize the multitude as a multitude? But I think I agree with the gist of what you're saying.

Replies from: shminux
comment by shminux · 2014-04-26T02:28:27.979Z · LW(p) · GW(p)

Just like I think of language as meta-grunting, I think of math as meta-counting. Some animals can count, and possibly add and subtract a bit, but abstracting it away from the application for the fun of it is what humans do.

comment by TheAncientGeek · 2014-04-25T18:04:06.106Z · LW(p) · GW(p)

Is "containing mathematical truth" the same as "running on math"?

comment by TheAncientGeek · 2014-04-24T19:19:03.263Z · LW(p) · GW(p)

Mixing truth and rationality is a failure mode. To know whether someone's statement is true, you have to understand it, and to understand it, you have to assume the speaker's rationality.

It's also a failure mode to attach "irrational" directly to beliefs. A belief is rational if it can be supported by an argument, and you don't carry the space of all possible arguments around in your head.

Replies from: Lumifer
comment by Lumifer · 2014-04-25T00:41:49.220Z · LW(p) · GW(p)

A belief is rational if it can be supported by an argument

That's an... interesting definition of "rational".

Replies from: Jayson_Virissimo, TheAncientGeek
comment by Jayson_Virissimo · 2014-04-25T01:31:07.317Z · LW(p) · GW(p)

Puts on Principle of Charity hat...

Maybe TheAncientGeek means:

(1) a belief is rational if it can be supported by a sound argument

(2) a belief is rational if it can be supported by a valid argument with probable premises

(3) a belief is rational if it can be supported by an inductively strong argument with plausible premises

(4) a belief is rational if it can be supported by an argument that is better than any counterarguments the agent knows of

etc...

Although personally, I think it is more helpful to think of rationality as having to do with how beliefs cohere with other beliefs and about how beliefs change when new information comes in than about any particular belief taken in isolation.

Replies from: Lumifer, TheAncientGeek
comment by Lumifer · 2014-04-25T03:25:36.018Z · LW(p) · GW(p)

I can't but note that the word "reality" is conspicuously absent here...

Replies from: TheAncientGeek, Jayson_Virissimo, Eugine_Nier
comment by TheAncientGeek · 2014-04-25T08:25:02.164Z · LW(p) · GW(p)

That there is empirical evidence for something is a good argument for it.

comment by Jayson_Virissimo · 2014-04-29T21:20:16.555Z · LW(p) · GW(p)

I can't but note that the word "reality" is conspicuously absent here...

Arguments of type (1) necessarily track reality (it is pretty much defined this way), (2) may or may not depending on the quality of the premises, (3) often does, and sometimes you just can't do any better than (4) with available information and corrupted hardware.

Just because I didn't use the word "reality" doesn't really mean much.

comment by Eugine_Nier · 2014-04-30T06:28:23.523Z · LW(p) · GW(p)

A definition of "rational argument" that explicitly referred to "reality" would be a lot less useful, since checking which arguments are rational is one of the steps in figuring what' real.

Replies from: Lumifer
comment by Lumifer · 2014-04-30T14:41:33.731Z · LW(p) · GW(p)

checking which arguments are rational is one of the steps in figuring out what's real

I am not sure this is (necessarily) the case, can you unroll?

Generally speaking, arguments live in the map and, in particular, in high-level maps which involve abstract concepts and reasoning. If I check the reality of the stone by kicking it and seeing if my toe hurts, no arguments are involved. And from the other side, classical logic is very much part of "rational arguments" and yet needs not correspond to reality.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2014-05-01T04:03:53.846Z · LW(p) · GW(p)

If I check the reality of the stone by kicking it and seeing if my toe hurts, no arguments are involved.

That tends to work less well for things that one can't directly observe, e.g., how old is the universe, or things where there is confounding noise, e.g., does this drug help.

Replies from: Lumifer
comment by Lumifer · 2014-05-01T15:00:54.613Z · LW(p) · GW(p)

That was a counterexample, not a general theory of cognition...

comment by TheAncientGeek · 2014-04-25T08:41:22.099Z · LW(p) · GW(p)

There isn't a finite list of rational beliefs, because someone could think of an argument for a belief that you haven't thought of.

There isn't a finite list of correct arguments either. People can invent new ones.

comment by TheAncientGeek · 2014-04-25T17:25:07.514Z · LW(p) · GW(p)

Well, it's not too compatible with self-congratulatory "rationality".

comment by RobinZ · 2014-04-24T15:47:31.563Z · LW(p) · GW(p)

I believe this disagreement is testable by experiment.

Replies from: Lumifer
comment by Lumifer · 2014-04-24T16:15:41.322Z · LW(p) · GW(p)

Do elaborate.

Replies from: RobinZ
comment by RobinZ · 2014-04-24T17:00:01.407Z · LW(p) · GW(p)

If you would more reliably understand what people mean by specifically treating it as the product of a rational and intelligent person, then executing that hack should lead to your observing a much higher rate of rationality and intelligence in discussions than you would previously have predicted. If the thesis is true, many remarks which, using your earlier methodology, you would have dismissed as the product of diseased reasoning will prove to be sound upon further inquiry.

If, however, you execute the hack for a few months and discover no change in the rate at which you discover apparently-wrong remarks to admit to sound interpretations, then TheAncientGeek's thesis would fail the test.

Replies from: TheAncientGeek, Lumifer
comment by TheAncientGeek · 2014-04-24T17:53:32.313Z · LW(p) · GW(p)

You will also get less feedback along the lines of "you just don't get it".

Replies from: RobinZ
comment by RobinZ · 2014-04-24T17:59:56.582Z · LW(p) · GW(p)

True, although being told less often that you are missing the point isn't, in and of itself, all that valuable; the value is in getting the point of those who otherwise would have given up on you with a remark along those lines.

(Note that I say "less often"; I was recently told that this criticism of Tom Godwin's "The Cold Equations", which I had invoked in a discussion of "The Ones Who Walk Away From Omelas", missed the point of the story - to which I replied along the lines of, "I get the point, but I don't agree with it.")

comment by Lumifer · 2014-04-24T17:16:28.575Z · LW(p) · GW(p)

That looks like a test of my personal ability to form correct first-impression estimates.

Also "will prove to be sound upon further inquiry" is an iffy part. In practice what usually happens is that statement X turns out to be technically true only under conditions A, B, and C, however in practice there is the effect Y which counterbalances X and the implementation of X is impractical for a variety of reasons, anyway. So, um, was statement X "sound"? X-/

Replies from: RobinZ
comment by RobinZ · 2014-04-24T17:51:09.746Z · LW(p) · GW(p)

That looks like a test of my personal ability to form correct first-impression estimates.

Precisely.

Also "will prove to be sound upon further inquiry" is an iffy part. In practice what usually happens is that statement X turns out to be technically true only under conditions A, B, and C, however in practice there is the effect Y which counterbalances X and the implementation of X is impractical for a variety of reasons, anyway. So, um, was statement X "sound"? X-/

Ah, I see. "Sound" is not the right word for what I mean; what I would expect to occur if the thesis is correct is that statements will prove to be apposite or relevant or useful - that is to say, valuable contributions in the context within which they were uttered. In the case of X, this would hold if the person proposing X believed that those conditions applied in the case described.

A concrete example would be someone who said, "you can divide by zero here" in reaction to someone being confused by a definition of the derivative of a function in terms of the limit of a ratio.
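
To spell the example out (standard calculus, included only for concreteness), the definition in question is

    f'(x) = \lim_{h \to 0} \frac{f(x+h) - f(x)}{h}

where the difference quotient approaches the indeterminate form 0/0 as h goes to 0, which is presumably the confusion that the "you can divide by zero here" remark responds to.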

comment by TheAncientGeek · 2014-04-23T20:35:21.650Z · LW(p) · GW(p)

Because you are not engaged in establishing facts about how smart someone is, you are instead trying to establish facts about what they mean by what they say.

comment by TheAncientGeek · 2014-04-23T19:24:49.826Z · LW(p) · GW(p)

I don't see what you are describing as being the standard PoC at all. May I suggest you call it something else.

Replies from: RobinZ
comment by RobinZ · 2014-04-23T21:03:38.510Z · LW(p) · GW(p)

How does the thing I am vaguely waving my arms at differ from the "standard PoC"?

comment by TheAncientGeek · 2014-04-23T17:34:13.065Z · LW(p) · GW(p)

Depends. Have you tried charitable interpretations of what they are saying that don't make them stupid, or are you going with your initial reaction?

Replies from: Lumifer
comment by Lumifer · 2014-04-23T18:14:07.064Z · LW(p) · GW(p)

I'm thinking that charity should not influence epistemology. Adjusting your map for charitable reasons seems like the wrong thing to do.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2014-04-23T18:38:55.942Z · LW(p) · GW(p)

I think you need to read up on the Principle of Charity and realise that it's about accurate communication, not some vague notion of niceness.

Replies from: Lumifer
comment by Lumifer · 2014-04-23T18:49:23.295Z · LW(p) · GW(p)

That's what my question upthread was about -- is the principle of charity as discussed in this thread a matter of my belief (=map) or is it only about communication?

Replies from: TheAncientGeek, TheAncientGeek
comment by TheAncientGeek · 2014-04-24T10:33:00.703Z · LW(p) · GW(p)

It's both. You need charity to communicate accurately, and also to form accurate beliefs. The fact that people you haven't been charitable towards seem stupid to you is not reliable data.

comment by TheAncientGeek · 2014-04-23T19:11:36.839Z · LW(p) · GW(p)

Research and discover.

Replies from: Vaniver
comment by Vaniver · 2014-04-23T22:08:42.393Z · LW(p) · GW(p)

How else would you interpret this series of clarifying questions?

Replies from: TheAncientGeek
comment by TheAncientGeek · 2014-04-23T22:16:12.934Z · LW(p) · GW(p)

I can tell someone the answer, but they might not believe me. They might be better off researching it from reliable sources than trying to figure it out from yet another stupid internet argument.

comment by TheAncientGeek · 2014-04-24T10:20:57.854Z · LW(p) · GW(p)

If you haven't attempted to falsify your belief by being charitable, then you should stop believing it. It's bad data.

comment by [deleted] · 2014-03-02T16:00:57.106Z · LW(p) · GW(p)

Ok, there's no way to say this without sounding like I'm signalling something, but here goes.

As I've already explained, there's a difficult problem here about how to be appropriately modest about our own rationality. When I say something, I never think it's stupid, otherwise I wouldn't say it. But at least I'm not so arrogant as to go around demanding other people acknowledge my highly advanced rationality. I don't demand that they accept "Chris isn't saying anything stupid" as an axiom in order to engage with me.

"If you can't say something you are very confident is actually smart, don't say anything at all." This is, in fact, why I don't say very much, or say it in a lot of detail, much of the time. I have all kinds of thoughts about all kinds of things, but I've had to retract sincerely-held beliefs so many times I just no longer bother embarrassing myself by opening my big dumb mouth.

Somewhat relatedly, I've begun to wonder if "rationalism" is really good branding for a movement. Rationality is systematized winning, sure, but the "rationality" branding isn't as good for keeping that front and center, especially compared to, say the effective altruism meme.

In my opinion, it's actually terrible branding for a movement. "Rationality is systematized winning"; ok, great, what are we winning at? Rationality and goals are orthogonal to each other, after all, and at first look, LW's goals can look like nothing more than an attempt to signal "I'm smarter than you" or even "I'm more of an emotionless Straw-Vulcan cyborg than you" to the rest of the world.

This is not a joke, I actually have a friend who virulently hates LW and resents his friends who get involved in it because he thinks we're a bunch of sociopathic Borg wannabes following a cult of personality. You might have an impulse right now to just call him an ignorant jerk and be done with it, but look, would you prefer the world in which you get to feel satisfied about having identified an ignorant jerk, or would you prefer the world in which he's actually persuaded about some rationalist ideas, makes some improvements to his life, maybe donates money to MIRI/CFAR, and so on? The latter, unfortunately, requires social engagement with a semi-hostile skeptic, which we all know is much harder than just calling him an asshole, taking our ball, and going home.

So anyway, what are we trying to do around here? It should be mentioned a bit more often on the website.

(At the very least, my strongest evidence that we're not a cult of personality is that we disagree amongst ourselves about everything. On the level of sociological health, this is an extremely good sign.)

That bit of LessWrong jargon is merely silly. Worse, I think, is the jargon around politics. Recently, a friend gave "they avoid blue-green politics" as a reason LessWrongians are more rational than other people. It took a day before it clicked that "blue-green politics" here basically just meant "partisanship." But complaining about partisanship is old hat—literally. America's founders were fretting about it back in the 18th century. Nowadays, such worries are something you expect to hear from boringly middle-brow columnists at major newspapers, not edgy contrarians.

While I do agree about the jargon issue, I think the contrarianism and the meta-contrarianism often make people feel they've arrived to A Rational Answer, at which point they stop thinking.

For instance, if Americans have always thought their political system is too partisan, has anyone in political science actually bothered to construct an objective measurement and collect time-series data? What does the time-series data actually say? Besides, once we strip off the tribal signalling, don't all those boringly mainstream ideologies actually have a few real points we could do with engaging with?

(Generally, LW is actually very good at engaging with those points, but we also simultaneously signal that we're adamantly refusing to engage in partisan politics. It's like playing an ideological Tsundere: "Baka! I'm only doing this because it's rational. It's not like I agree with you or anything! blush")

I've made no secret of the fact that I'm not a big fan of the principle of charity—often defined as the rule that you should interpret other people's arguments on the assumption that they are not saying anything stupid. The problem with this is that other people are often saying something stupid.

Ok, but then let me propose a counter-principle: Principle of Informative Calling-Out. I actively prefer to be told when I'm wrong and corrected. Unfortunately, once you ditch the principle of charity, the most common response to an incorrect statement often becomes, essentially, "Just how stupid are you!?", or other forms of low-information signalling about my interlocutor's intelligence and rationality compared to mine.

I need to emphasize that I really do think philosophers are showing off real intelligence, not merely showing off faux-cleverness. GRE scores suggest philosophers are among the smartest academics, and their performance is arguably made more impressive by the fact that GRE quant scores are bimodally distributed based on whether your major required you to spend four years practicing your high school math, with philosophy being one of the majors that doesn't grant that advantage. Based on this, if you think it's wrong to dismiss the views of high-IQ people, you shouldn't be dismissive of mainstream philosophy. But in fact I think LessWrong's oft-noticed dismissiveness of mainstream philosophy is largely justified.

You should be looking at this instrumentally. The question is not whether you think "mainstream philosophy" (the very phrase is suspect, since mainstream academic philosophy divides into a number of distinct schools, Analytic and Continental being the top two off the top of my head) is correct. The question is whether you think you will, at some point, have any use for interacting with mainstream philosophy and its practitioners. If they will be useful to you, it is worth learning their vocabulary and their modes of operation in order to, when necessary, enlist their aid, or win at their game.

comment by Viliam_Bur · 2014-03-01T19:53:51.733Z · LW(p) · GW(p)

there seem to be a lot of people in the LessWrong community who imagine themselves to be (...) paragons of rationality who other people should accept as such.

Uhm. My first reaction is to ask "who specifically?", because I don't have this impression. (At least I think most people here aren't like this, and if a few happen to be, I probably did not notice the relevant comments.) On the other hand, if I imagine myself in your place, even if I had specific people in mind, I probably wouldn't want to name them, to avoid making it a personal accusation instead of an observation of trends. Now I don't know what to do.

Perhaps someone else could give me a few examples of comments (preferably by different people) where LW members imagine themselves paragons of rationality and ask other people to accept them as such? (If I happen to be such an example myself, that information would be even more valuable to me. Feel free to send me a private message if you hesitate to write it publicly, but I don't mind if you do. Crocker's rules, Litany of Tarski, etc.)

I've encountered people talking as if it's ridiculous to suggest they might sometimes respond badly to being told the truth about certain subjects.

I do relate to this one, even if I don't know whether I have expressed this sentiment on LW. I believe I am able to listen to opinions that are unpleasant or that I disagree with, without freaking out, much more than an average person, although not literally always. It's stronger in real life than online, because in real life I take time to think, while on the internet I am more in "respond and move on (there are so many other pages to read)" mode. Some other people have told me they noticed this about me, so it's not just my own imagination.

Okay, you probably didn't mean me with this one... I just wanted to say I don't see this as a bad thing per se, assuming the person is telling the truth. And I also believe that LW has a higher ratio of people for whom this is true, compared with the average population, although not everyone here is like that.

Yet the readiness of members of the LessWrong community to disagree with and criticize each other suggests we don't actually think all that highly of each other's rationality.

I don't consider everyone here rational, and it's likely some people don't consider me rational. But there are also other reasons for frequent disagreement.

Aspiring rationalists are sometimes encouraged to make bets, because a bet is a tax on bullshit, and paying a lot of that tax may show you your irrationality and encourage you to get rid of it. Even when it's not about money, we need to calibrate ourselves. Some of us use PredictionBook, and CFAR has developed the calibration game.
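(A minimal sketch of what scoring that kind of calibration could look like; the probabilities and outcomes below are invented for illustration, and the Brier score is just one common scoring rule, not anything PredictionBook or CFAR specifically prescribes.)

```python
# Brier score over a handful of predictions: lower is better, and always
# saying 50% scores exactly 0.25, so doing worse than that is a red flag.
predictions = [0.9, 0.7, 0.6, 0.95, 0.3]   # stated probabilities (made up)
outcomes    = [1,   1,   0,   1,    0]     # 1 = the claim turned out true

brier = sum((p - o) ** 2 for p, o in zip(predictions, outcomes)) / len(predictions)
print(f"Brier score: {brier:.3f}")

# Grouping predictions by stated confidence (all the ~90% claims together,
# all the ~70% claims together, ...) and comparing each bucket to its observed
# frequency shows over- or under-confidence directly.
```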

Analogously, if I have an opinion, I say it in a comment, because that's similar to making a bet. If I am wrong, I will likely get feedback, which is an opportunity to learn. I trust other people here intellectually to disagree with me only if they have a good reason to disagree, and I also trust them emotionally that if I happen to write something stupid, they will just correct me and move on (instead of e.g. reminding me of my mistake for the rest of my life). Because of this, I post my opinions here more often, and voice them more strongly if I feel it's deserved. Thus, more opportunity for disagreement.

On a different website I might keep quiet instead, or speak very diplomatically, which would give less opportunity for disagreement; but it wouldn't mean I had a higher estimate of that community's rationality; quite the opposite. If disagreement is disrespect, then tiptoeing around the mere possibility of disagreement means considering the other person insane. Which is how I learned to behave outside of LW; and I am still not near the level of disdain that Carnegie-like behavior would require.

I've heard people cite this as a reason to be reluctant to post/comment (again showing they know intuitively that disagreement is disrespect).

We probably should have some "easy mode" for the beginners. But we shouldn't turn the whole website into the "easy mode". Well, this probably deserves a separate discussion.

Yet I've heard people suggest that you must never be dismissive of things said by smart people, or that the purportedly high IQ of the LessWrong community means people here don't make bad arguments.

On a few occasions I made fun of Mensa on LW, and I don't remember anyone contradicting me, so I thought we had a consensus that high IQ does not imply high rationality (although some level may be necessary). Stanovich wrote a book about it, and Kaj Sotala reviewed it here.

You make a few very good points in the article. Confusing intelligence with rationality is bad; selective charity is unfair; asking someone to treat me as a perfect rationalist is silly; it's good to apply healthy cynicism also to your own group; and we should put more emphasis on being aspiring rationalists. It just seems to me that you perceive the LW community as less rational than I do. Maybe we just have different people in mind when we think about the community. (By the way, I am curious if there is a correlation between people who complain that you don't believe in their sanity, and people who are reluctant to comment on LW because of the criticism.)

comment by Sniffnoy · 2014-03-01T22:32:12.223Z · LW(p) · GW(p)

I've made no secret of the fact that I'm not a big fan of the principle of charity—often defined as the rule that you should interpret other people's arguments on the assumption that they are not saying anything stupid. The problem with this is that other people are often saying something stupid. Because of that, I think charitable is over-rated compared to fair and accurate reading. When someone says something stupid, you don't have to pretend otherwise, but it's really important not to attribute to people stupid things they never said.

Getting principle of charity right can be hard in general. A common problem is when something can be interpreted as stupid in two different ways; namely, it has an interpretation which is obviously false, and another interpretation which is vacuous or trivial. (E.g.: "People are entirely selfish.") In cases like this, where it's not clear what the charitable reading is, it may just be best to point out what's going on. ("I'm not certain what you mean by that. I see two ways of interpreting your statement, but one is obviously false, and the other is vacuous.") Assuming they don't mean the wrong thing is not the right answer, as if they do, you're sidestepping actual debate. Assuming they don't mean the trivial thing is not the right answer, because sometimes these statements are worth making. Whether a statement is considered trivial or not depends on who you're talking to, and so what statements your interlocutor considers trivial will depend on who they've been talking to and reading. E.g., if they've been hanging around with non-reductionists, they might find it worthwhile to restate the basic principles of reductionism, which here we would consider trivial; and so it's easy to make a mistake and be "charitable" to them by assuming they're arguing for a stronger but incorrect position (like some sort of greedy reductionism). Meanwhile people are using the same words to mean different things because they haven't calibrated abstract words against actual specifics and the debate becomes terribly unproductive.

Really, being explicit about how you're interpreting something if it's not the obvious way is probably best in general. ("I'm going to assume you mean [...], because as written what you said has an obvious error, namely, [...]".) A silent principle of charity doesn't seem very helpful.

But for a helpful principle of charity, I don't think I'd go for anything about what assumptions you should be making. ("Assume the other person is arguing in good faith" is a common one, and this is a good idea, but if you don't already know what it means, it's not concrete enough to be helpful; what does that actually cash out to?) Rather, I'd go for one about what assumptions you shouldn't make. That is to say: If the other person is saying something obviously stupid (or vacuous, or whatever), consider the possibility that you are misinterpreting them. And it would probably be a good idea to ask for clarification. ("Apologies, but it seems to me you're making a statement that's just clearly false, because [...]. Am I misunderstanding you? Perhaps your definition of [...] differs from mine?") Then perhaps you can get down to figuring out where your assumptions differ and where you're using the same words in different ways.

But honestly a lot of the help of the principle of charity may just be to get people to not use the "principle of anti-charity", where you assume your interlocutor means the worst possible (in whatever sense) thing they could possibly mean. Even a bad principle of charity is a huge improvement on that.

Replies from: JoshuaZ
comment by JoshuaZ · 2014-06-28T23:47:18.267Z · LW(p) · GW(p)

There are, I think, two other related aspects that are relevant. First, there's some tendency to interpret what other people say in a highly non-charitable or anti-charitable fashion when one already disagrees with them about something, so a principle of charity helps to counteract that. Second, even when one is using a non-silent charity principle, it can, if one is not careful, come across as condescending, so it is important to phrase it in a way that minimizes those issues.

comment by Richard_Kennaway · 2014-04-14T08:36:49.766Z · LW(p) · GW(p)

As Cowen and Hanson put it, "Merely knowing someone else’s opinion provides a powerful summary of everything that person knows, powerful enough to eliminate any differences of opinion due to differing information." So sharing evidence the normal way shouldn't be necessary.

This is one of the loonier[1] ideas to be found on Overcoming Bias (and that's quite saying something). Exercise for the reader: test this idea that sharing opinions screens off the usefulness of sharing evidence with the following real-world scenario. I have participated in this scenario several times and know what the correct answer is.

You are on the programme committee of a forthcoming conference, which is meeting to decide which of the submitted papers to accept. Each paper has been refereed by several people, each of whom has given a summary opinion (definite accept, weak accept, weak reject, or definite reject) and supporting evidence for the opinion.

To transact business most efficiently, some papers are judged solely on the summary opinions. Every paper rated a definite accept by every referee for that paper is accepted without further discussion, because if three independent experts all think it's excellent, it probably is, and further discussion is unlikely to change that decision. Similarly, every paper firmly rejected by every referee is rejected. For papers that get a uniformly mediocre rating, the committee have to make some judgement about where to draw the line between filling out the programme and maintaining a high standard.

That leaves a fourth class: papers where the referees disagree sharply. Here is a paper where three referees say definitely accept, one says definitely reject. On another paper, it's the reverse. Another, two each way.

How should the committee decide on these papers? By combining the opinions only, or by reading the supporting evidence?

ETA: [1] By which I mean not "so crazy it must be wrong" but "so wrong it's crazy".

Replies from: gwern, ChristianKl
comment by gwern · 2014-04-25T18:29:49.070Z · LW(p) · GW(p)

This is one of the loonier[1] ideas to be found on Overcoming Bias (and that's quite saying something). Exercise for the reader: test this idea that sharing opinions screens off the usefulness of sharing evidence with the following real-world scenario. I have participated in this scenario several times and know what the correct answer is.

Verbal abuse is not a productive response to the results of an abstract model. Extended imaginary scenarios are not a productive response either. Neither explains why the proofs are wrong or inapplicable, or if inapplicable, why they do not serve useful intellectual purposes such as proving some other claim by contradiction or serving as an ideal to aspire to. Please try to do better.

Replies from: Richard_Kennaway, Richard_Kennaway
comment by Richard_Kennaway · 2014-04-25T18:49:22.642Z · LW(p) · GW(p)

Extended imaginary scenarios are not a productive response either.

As I said, the scenario is not imaginary.

Please try to do better.

I might have done so, had you not inserted that condescending parting shot.

Replies from: ChristianKl, gwern
comment by ChristianKl · 2014-04-25T23:59:23.517Z · LW(p) · GW(p)

As I said, the scenario is not imaginary.

Your real world scenario tells you that sometimes sharing evidence will move judgements in the right direction.

Thinking that Robin Hanson or someone else on Overcoming Bias hasn't thought of that argument is naive. Robin Hanson might sometimes make arguments that are wrong, but he's not stupid. If you are treating him as if he were, then you are likely arguing against a strawman.

Apart from that, your example also has strange properties, like only four different kinds of judgement that reviewers are allowed to make. Why would anyone choose four?

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2014-04-30T08:11:08.578Z · LW(p) · GW(p)

Your real world scenario tells you that sometimes sharing evidence will move judgements in the right direction.

It is a lot more than "sometimes". In my experience (mainly in computing) no journal editor or conference chair will accept a referee's report that provides nothing but an overall rating of the paper. The rubric for the referees often explicitly states that. Where ratings of the same paper differ substantially among referees, the reasons for those differing judgements are examined.

Apart from that your example also has strange properties like only four different kind of judgements that reviewers are allowed to make. Why would anyone choose four?

The routine varies but that one is typical. A four-point scale (sometimes with a fifth not on the same dimension: "not relevant to this conference", which trumps the scalar rating). Sometimes they ask for different aspects to be rated separately (originality, significance, presentation, etc.). Plus, of course, the rationale for the verdict, without which the verdict will not be considered and someone else will be found to referee the paper properly.

Anyone is of course welcome to argue that they're all doing it wrong, or to found a journal where publication is decided by simple voting rounds without discussion. However, Aumann's theorem is not that argument, it's not the optimal version of Delphi (according to the paper that gwern quoted), and I'm not aware of any such journal. Maybe Plos ONE? I'm not familiar with their process, but their criteria for inclusion are non-standard.

Replies from: ChristianKl
comment by ChristianKl · 2014-04-30T13:45:30.074Z · LW(p) · GW(p)

It is a lot more than "sometimes". In my experience (mainly in computing) no journal editor or conference chair will accept a referee's report that provides nothing but an overall rating of the paper.

That just tells us that the journals believe the rating isn't the only thing that matters. But most journals just do things that make sense to them. They don't draft their policies based on findings of decision science.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2014-04-30T13:57:06.597Z · LW(p) · GW(p)

But most journals just do things that make sense to them. They don't draft their policies based on findings of decision science.

Those findings being? Aumann's theorem doesn't go the distance. Anyway, I have no knowledge of how they draft their policies, merely some of what those policies are. Do you have some information to share here?

Replies from: ChristianKl
comment by ChristianKl · 2014-04-30T16:22:53.317Z · LW(p) · GW(p)

For example, that Likert scales are nice if you want someone to give you their opinion.

Of course, it might make sense to actually run experiments. Big publishers rule over thousands of journals, so it should be easy for them to do the necessary research if they wanted to.

comment by gwern · 2014-04-25T18:58:09.195Z · LW(p) · GW(p)

As I said, the scenario is not imaginary.

Yes, it is. You still have not addressed what is either wrong with the proofs or why their results are not useful for any purpose.

I might have done so, had you not inserted that condescending parting shot.

Wow. So you started it, and now you're going to use a much milder insult as an excuse not to participate? Please try to do better.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2014-04-25T19:24:44.170Z · LW(p) · GW(p)

Well, the caravan moves on. That -1 on your comment isn't mine, btw.

comment by Richard_Kennaway · 2014-04-25T20:01:00.147Z · LW(p) · GW(p)

This is one of the loonier[1] ideas to be found on Overcoming Bias (and that's quite saying something).

That was excessive, and I now regret having said it.

comment by ChristianKl · 2014-04-25T16:16:44.257Z · LW(p) · GW(p)

I think the most straightforward way is to do a second round. Let every referee read the opinions of the other referees and see whether they converge onto a shared judgement.

If you want a more formal name, this is the Delphi method.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2014-04-25T16:39:00.941Z · LW(p) · GW(p)

What actually happens is that the reasons for the summary judgements are examined.

Three for, one against. Is the dissenter the only one who has not understood the paper, or the only one who knows that although the work is good, almost the same paper has just been accepted to another conference? The set of summary judgements is the same but the right final judgement is different. Therefore there is no way to get the latter from the former.

Aumann agreement requires common knowledge of each others' priors. When does this ever obtain? I believe Robin Hanson's argument about pre-priors just stands the turtle on top of another turtle.

Replies from: TheAncientGeek, gwern
comment by TheAncientGeek · 2014-04-25T16:49:11.520Z · LW(p) · GW(p)

People don't coincide in their priors, don't have access to the same evidence, aren't running off the same epistemology, and can't settle epistemological debates non-circularly...

There's a lot wrong with Aumann, or at least with the way some people use it.

comment by gwern · 2014-04-25T18:28:02.452Z · LW(p) · GW(p)

What actually happens is that the reasons for the summary judgements are examined.

Really? My understanding was that

Between each iteration of the questionnaire, the facilitator or monitor team (i.e., the person or persons administering the procedure) informs group members of the opinions of their anonymous colleagues. Often this “feedback” is presented as a simple statistical summary of the group response, usually a mean or median value, such as the average group estimate of the date before which an event will occur. As such, the feedback comprises the opinions and judgments of all group members and not just the most vocal. At the end of the polling of participants (after several rounds of questionnaire iteration), the facilitator takes the group judgment as the statistical average (mean or median) of the panelists’ estimates on the final round.

(From Rowe & Wright's "Expert opinions in forecasting: the role of the Delphi technique", in the usual Armstrong anthology.) From the sound of it, the feedback is often purely statistical in nature, and if it wasn't commonly such restricted feedback, it's hard to see why Rowe & Wright would criticize Delphi studies for this:

The use of feedback in the Delphi procedure is an important feature of the technique. However, research that has compared Delphi groups to control groups in which no feedback is given to panelists (i.e., non-interacting individuals are simply asked to re-estimate their judgments or forecasts on successive rounds prior to the aggregation of their estimates) suggests that feedback is either superfluous or, worse, that it may harm judgmental performance relative to the control groups (Boje and Murnighan 1982; Parenté, et al. 1984). The feedback used in empirical studies, however, has tended to be simplistic, generally comprising means or medians alone with no arguments from panelists whose estimates fall outside the quartile ranges (the latter being recommended by the classical definition of Delphi, e.g., Rowe et al. 1991). Although Boje and Murnighan (1982) supplied some written arguments as feedback, the nature of the panelists and the experimental task probably interacted to create a difficult experimental situation in which no feedback format would have been effective.
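(To make the "purely statistical feedback" variant concrete, here is a toy sketch; the half-step-toward-the-median update rule and the starting numbers are assumptions for illustration, not anything from Rowe & Wright or the classical Delphi definition.)

```python
# Toy Delphi rounds where panelists see only the group median and, by
# assumption, move half-way toward it each round. No rationales are shared.
from statistics import median

estimates = [10.0, 30.0, 35.0, 80.0]   # initial forecasts of some quantity

for round_no in range(1, 4):
    m = median(estimates)
    estimates = [e + 0.5 * (m - e) for e in estimates]
    print(f"round {round_no}: median feedback {m:.1f} -> "
          f"{['%.1f' % e for e in estimates]}")

print("final group judgement:", median(estimates))
```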

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2014-04-25T18:50:58.533Z · LW(p) · GW(p)

What actually happens is that the reasons for the summary judgements are examined.

Really? My understanding was that

I was referring to what actually happens in a programme committee meeting, not the Delphi method.

Replies from: gwern
comment by gwern · 2014-04-25T18:57:06.242Z · LW(p) · GW(p)

I was referring to what actually happens in a programme committee meeting, not the Delphi method.

Fine. Then consider it an example of 'loony' behavior in the real world: Delphi pools, as a matter of fact, for many decades, have operated by exchanging probabilities and updating repeatedly, and in a number of cases performed well (justifying their continued usage). You don't like Delphi pools? That's cool too, I'll just switch my example to prediction markets.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2014-04-25T19:02:16.600Z · LW(p) · GW(p)

It would be interesting to conduct an experiment to compare the two methods for this problem. However, it is not clear how to obtain a ground truth with which to judge the correctness of the results. BTW, my further elaboration, with the example of one referee knowing that the paper under discussion was already published, was also non-fictional. It is not clear to me how any decision method that does not allow for sharing of evidence can yield the right answer for this example.

What have Delphi methods been found to perform well relative to, and for what sorts of problems?

Replies from: ChristianKl, gwern
comment by ChristianKl · 2014-04-25T19:55:06.437Z · LW(p) · GW(p)

However, it is not clear how to obtain a ground truth with which to judge the correctness of the results.

That assumes we don't have any criteria on which to judge good versus bad scientific papers.

You could train your model to predict the number of citations that a paper will get. You can also look at variables such as reproduced papers or withdrawn papers.

Define a utility function that collapses such variables into a single one. Run a real world experiment in a journal and do 50% of the paper submissions with one mechanism and 50% with the other. Let a few years go by and then you evaluate the techniques based on your utility function.
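(A minimal sketch of the kind of utility function and comparison being described; the weights below are arbitrary placeholders, not anything proposed in the comment.)

```python
# Collapse the outcome variables into one number per paper, then score each
# review mechanism by the average utility of the papers it accepted.
def paper_utility(citations: int, reproduced: bool, withdrawn: bool) -> float:
    return citations + (20.0 if reproduced else 0.0) - (100.0 if withdrawn else 0.0)

def mechanism_score(accepted_papers) -> float:
    # accepted_papers: list of (citations, reproduced, withdrawn) tuples,
    # measured a few years after the 50/50 randomized split.
    scores = [paper_utility(*p) for p in accepted_papers]
    return sum(scores) / len(scores)

print(mechanism_score([(12, True, False), (3, False, False), (40, False, True)]))
```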

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2014-04-30T08:14:54.880Z · LW(p) · GW(p)

You could train your model to predict the number of citations that a paper will get. You can also look at variables such as reproduced papers or withdrawn papers.

Define a utility function that collapses such variables into a single one. Run a real world experiment in a journal and do 50% of the paper submissions with one mechanism and 50% with the other. Let a few years go by and then you evaluate the techniques based on your utility function.

Something along those lines might be done, but an interventional experiment (creating journals just to test a hypothesis about refereeing) would be impractical. That leaves observational data-collecting, where one might compare the differing practices of existing journals. But the confounding problems would be substantial.

Or, more promisingly, you could do an experiment with papers that are already published and have a citation record, and have experimental groups of referees assess them, and test different methods of resolving disagreements. That might actually be worth doing, although it has the flaw that it would only be assessing accepted papers and not the full range of submissions.

Replies from: ChristianKl
comment by ChristianKl · 2014-04-30T09:06:37.028Z · LW(p) · GW(p)

Then there's no reason why you can't test different procedures in an existing journal.

comment by gwern · 2014-04-25T19:04:40.941Z · LW(p) · GW(p)

However, it is not clear how to obtain a ground truth with which to judge the correctness of the results.

It is if you take 5 seconds to think about it and compare it to any prediction market, calibration exercise, forecasting competition, betting company, or general market: finance, geo-political events, sporting events, almanac items. Ground-truths aren't exactly hard to come by.

What have Delphi methods been found to perform well relative to, and for what sorts of problems?

I already mentioned a review paper. It's strange you aren't already familiar with the strengths and weaknesses of decision & forecasting methods which involve people communicating only summaries of their beliefs to reach highly accurate results, given how loony you think these methods are and how certain of this you are.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2014-04-25T19:21:23.161Z · LW(p) · GW(p)

It is if you take 5 seconds to think about it. Finance. Geo-political events. Sporting events. Almanac items.

Sorry, I was still talking ("this problem") about the example I introduced.

I already mentioned a review paper.

Which recommends sharing "average estimates plus justifications" and "provide the mean or median estimate of the panel plus the rationales from all panellists". They found that providing reasons was better than only statistics of the judgements (see paragraph following your second quote). As happens in the programme committee. The main difference from Delphi is that the former is not structured into rounds in the same way. The referees send in their judgements, then the committee (a smaller subset of the referees) decides.

None of this is Aumann sharing of posteriors.

Replies from: gwern
comment by gwern · 2015-02-07T04:08:18.287Z · LW(p) · GW(p)

Sorry, I was still talking ("this problem") about the example I introduced.

If the method works on other problems, that seems like good evidence it works on your specific conference paper problem, no?

Which recommends sharing "average estimates plus justifications" and "provide the mean or median estimate of the panel plus the rationales from all panellists".

Indeed, it does - but it says that it works better than purely statistical feedback. More information is often better. But why is that relevant? You are moving the goalposts; earlier you asked:

It is not clear to me how any decision method that does not allow for sharing of evidence can yield the right answer for this example.

I brought up prediction markets and Delphi pools because they are mechanisms which function very similarly to Aumann agreement in sharing summaries rather than evidence, and yet they work. Whether they work is not the same question as whether there is anything which could work faster, and you are replying to the former question, which is indisputably true despite your skepticism, as if it were the latter. (It's obvious that simply swapping summaries may be slower than regular Aumannian agreement: you could imagine that instead of taking a bunch of rounds to converge, one sends all its data to the other, the other recomputes, and sends the new result back and convergence is achieved.)

comment by Sophronius · 2014-03-06T20:50:03.513Z · LW(p) · GW(p)

Edited to add: in the original post, I intended but forgot to emphasize that I think the correlation between IQ and rationality is weak at best. Do people disagree? Does anyone want to go out on a limb and say, "They aren't the same thing, but the correlation is still very strong?"

I'll go ahead and disagree with this. Sure, there's a lot of smart people who aren't rational, but then I would say that rationality is less common than intelligence. On the other hand, all the rational people I've met are very smart. So it seems really high intelligence is a necessary but not a sufficient condition. Or as Draco Malfoy would put it: "Not all Slytherins are Dark Wizards, but all Dark Wizards are from Slytherin."

I largely agree with the rest of your post Chris (upvoted), though I'm not convinced that the self-congratulatory part is Less Wrong's biggest problem. Really, it seems to me that a lot of people on Less Wrong just don't get rationality. They go through all the motions and use all of the jargon, but don't actually pay attention to the evidence. I frequently find myself wanting to yell "stop coming up with clever arguments and pay attention to reality!" at the screen. A large part of me worries that rationality really can't be taught; that if you can't figure out the stuff on Less Wrong by yourself, there's no point in reading about it. Or, maybe there's a selection effect and people who post more comments tend to be less rational than those who lurk?

Replies from: Richard_Kennaway, dthunt, David_Gerard
comment by Richard_Kennaway · 2014-07-04T21:56:51.264Z · LW(p) · GW(p)

A large part of me worries that rationality really can't be taught; that if you can't figure out the stuff on Less Wrong by yourself, there's no point in reading about it.

The teaching calls to what is within the pupil. To borrow a thought from Georg Christoph Lichtenberg, if an ass looks into LessWrong, it will not see a sage looking back.

I have a number of books of mathematics on my shelves. In principle, I could work out what is in them, but in practice, to do so I would have to be of the calibre of a multiple Fields Medallist and Nobel laureate, and exercise that ability for multiple lifetimes. Yet I can profitably read them, understand them, and use that knowledge; but that does still require at least a certain level of ability and previous learning.

Or to put that another way, learning is in P; figuring it out by yourself is in NP.

Replies from: Sophronius
comment by Sophronius · 2014-07-04T22:35:19.331Z · LW(p) · GW(p)

Agreed. I'm currently under the impression that most people cannot become rationalists even with training, but training those who do have the potential increases the chance that they will succeed. Still I think rationality cannot be taught like you might teach a university degree: A large part of it is inspiration, curiosity, hard work and wanting to become stronger. And it has to click. Just sitting in the classroom and listening to the lecturer is not enough.

Actually now that I think about it, just sitting in the classroom and listening to the lecturer for my economics degree wasn't nearly enough to gain a proper understanding either, yet that's all that most people did (aside from a cursory reading of the books of course). So maybe the problem is not limited to rationality but more about becoming really proficient at something in general.

comment by dthunt · 2014-07-04T15:45:40.338Z · LW(p) · GW(p)

Reading something and understanding/implementing it are not quite the same thing. It takes clock time and real effort to change your behavior.

I do not think it is unexpected that a large portion of the population on a site dedicated to writing, teaching, and discussing the skills of rationality is going to be, you know, still very early in the learning, and that some people will have failed to grasp a lesson they think they have grasped, and that others will think others have failed to grasp a lesson that they have failed to grasp, and that you will have people who just like to watch stuff burn.

I'm sure it's been asked elsewhere, and I liked the estimation questions on the 2013 survey; has there been a more concerted effort to see what being an experienced LWer translates to, in terms of performance on various tasks that, in theory, people using this site are trying to get better at?

Replies from: Sophronius
comment by Sophronius · 2014-07-04T18:05:26.999Z · LW(p) · GW(p)

Yes, you hit the nail on the head. Rationality takes hard work and lots of practice, and too often people on Less Wrong just spend time making clever arguments instead of doing the actual work of asking what the actual answer is to the actual question. It makes me wonder whether Less Wrongers care more about being seen as clever than they care about being rational.

As far as I know there's been no attempt to make a rationality/Bayesian reasoning test, which I think is a great pity because I definitely think that something like that could help with the above problem.

Replies from: dthunt
comment by dthunt · 2014-07-04T18:41:11.421Z · LW(p) · GW(p)

There are many calibration tests you can take (there are many articles on this site with links to see if you are over-or-underconfident on various subject tests - search for calibration).

What I don't know is if there has been some effort to do this across many questions, and compile the results anonymously for LWers.

I caution against jumping quickly to conclusions about "signalling". Frankly, I suspect you are wrong, and that most of the people here are in fact trying. Some might not be, and are merely looking for sparring matches. Those people are still learning things (albeit perhaps with less efficiency).

As far as "seeming clever", perhaps as a community it makes sense to advocate people take reasoning tests which do not strongly correlate with IQ, and that people generally do quite poorly on (I'm sure someone has a list, though it may be a relatively short list of tasks), which might have the effect of helping people to see stupid as part of the human condition, and not merely a feature of "non-high-IQ" humans.

Replies from: Sophronius
comment by Sophronius · 2014-07-04T18:52:14.828Z · LW(p) · GW(p)

Fair enough, that was a bit too cynical/negative. I agree that people here are trying to be rational, but you have to remember that signalling does not need to be on purpose. I definitely detect a strong impulse amongst the Less Wrong crowd to veer towards controversial and absurd topics rather than the practical, and to make use of meta-level thinking and complex abstract arguments instead of simple and solid reasoning. It may not feel that way from the inside, but from the outside point of view it does kind of look like Less Wrong is optimizing for being clever and controversial rather than rational.

I definitely say yes to (bayesian) reasoning tests. Someone who is not me needs to go do this right now.

Replies from: dthunt
comment by dthunt · 2014-07-04T19:06:47.243Z · LW(p) · GW(p)

I don't know that there is anything to do, or that should be done, about that outside-view problem. Understanding why people think you're being elitist or crazy doesn't necessarily help you avoid the label.

http://lesswrong.com/lw/kg/expecting_short_inferential_distances/

Replies from: Sophronius
comment by Sophronius · 2014-07-04T19:29:28.778Z · LW(p) · GW(p)

Huh? If the outside view tells you that there's something wrong, then the problem is not with the outside view but with the thing itself. It has nothing to do with labels or inferential distance. The outside-view is a rationalist technique used for viewing a matter you're personally involved in objectively by taking a step back. I'm saying that when you take a step back and look at things objectively, it looks like Less Wrong spends more time and effort on being clever than on being rational.

But now that you've brought it up, I'd also like to add that the habit on Less Wrong to assume that any criticism or disagreement must be because of inferential distance (really just a euphemism for saying the other guy is clueless) is an extremely bad one.

Replies from: Nornagest, dthunt
comment by Nornagest · 2014-07-04T19:45:50.862Z · LW(p) · GW(p)

The outside view isn't magic. Finding the right reference class to step back into, in particular, can be tricky, and the experiments the technique is drawn from deal almost exclusively with time forecasting; it's hard to say how well it generalizes outside that domain.

Don't take this as quoting scripture, but this has been discussed before, in some detail.

Replies from: Sophronius
comment by Sophronius · 2014-07-04T20:34:20.722Z · LW(p) · GW(p)

Okay, you're doing precisely the thing I hate and which I am criticizing about Less Wrong. Allow me to illustrate:

LW1: Guys, it seems to me that Less Wrong is not very rational. What do you think?
LW2: What makes you think Less Wrong isn't rational?
LW1: Well if you take a step back and use the outside view, Less Wrong seems to be optimizing for being clever rather than optimizing for being rational. That's a pretty decent indicator.
LW3: Well, the outside view has theoretical limitations, you know. Eliezer wrote a post about how it is possible to misuse the outside point of view as a conversation stopper.
LW1: Uh, well unless I actually made a mistake in applying the outside view I don't see why that's relevant? And if I did make a mistake in applying it it would be more helpful to say what it was I specifically did wrong in my inference.
LW4: You are misusing the term inference! Here, someone wrote a post about this at some point.
LW5: Yea but that post has theoretical limitations.
LW1: I don't care about any of that, I want to know whether or not Less Wrong is succeeding at being rational. Stop making needlessly theoretical abstract arguments and talk about the actual thing we were actually talking about.
LW6: I agree, people here use LW jargon as as a form of applause light!
LW1: Uh...
LW7: You know, accusing others of using applause lights is a fully generalized counter argument!
LW6: Oh yea? Well fully generalized counter arguments are fully generalized counter arguments themselves, so there!

We're only at LW3 right now so maybe this conversation can still be saved from becoming typical Less Wrong-style meta screwery. Or to make my point more politely: Please tell me whether or not you think Less Wrong is rational and whether or not something should be done, because that's the thing we're actually talking about.

Replies from: Nornagest, dthunt, TheAncientGeek
comment by Nornagest · 2014-07-04T20:39:19.654Z · LW(p) · GW(p)

Dude, my post was precisely about how you're making a mistake in applying the outside view. Was I being too vague, too referential? Okay, here's the long version, stripped of jargon because I'm cool like that.

The point of the planning fallacy experiments is that we're bad at estimating the time we're going to spend on stuff, mainly because we tend to ignore time sinks that aren't explicitly part of our model. My boss asks me how long I'm going to spend on a task: I can either look at all the subtasks involved and add up the time they'll take (the inside view), or I can look at similar tasks I've done in the past and report how long they took me (the outside view). The latter is going to be larger, and it's usually going to be more accurate.
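(A toy contrast between the two estimates, with all numbers invented for illustration:)

```python
# Inside view: add up the time the subtasks "should" take.
subtask_estimates_hours = [2, 3, 1, 4]
inside_view = sum(subtask_estimates_hours)

# Outside view: ignore the plan and look at how long similar past tasks took.
past_similar_tasks_hours = [14, 9, 20, 11, 16]
outside_view = sum(past_similar_tasks_hours) / len(past_similar_tasks_hours)

print(f"inside view:  {inside_view} hours")    # typically optimistic
print(f"outside view: {outside_view} hours")   # larger, and usually closer to reality
```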

That's a pretty powerful practical rationality technique, but its domain is limited. We have no idea how far it generalizes, because no one (as far as I know) has rigorously tried to generalize it to things that don't have to do with time estimation. Using the outside view in its LW-jargon sense to describe any old thing is therefore almost completely meaningless; it's equivalent to saying "this looks to me like a $SCENARIO1". As long as there also exists a $SCENARIO2, invoking the outside view gives us no way to distinguish between them. Underfitting is a problem. Overfitting is also a problem. Which one's going to be more of a problem in a particular reference class? There are ways of figuring that out, like Yvain's centrality heuristic, but crying "outside view" is not one of them.

As to whether LW is rational, I got bored of that kind of hand-wringing years ago. If all you're really looking for is an up/down vote on that, I suggest a poll, which I will probably ignore because it's a boring question.

Replies from: Sophronius
comment by Sophronius · 2014-07-04T21:27:51.200Z · LW(p) · GW(p)

Ok, I guess I could have inferred your meaning from your original post, so sorry if my reply was too snarky. But seriously, if that's your point I would have just made it like this:

"Dude you're only supposed to use the phrase 'outside view' with regards to the planning fallacy, because we don't know if the technique generalizes well."

And then I'd go back and change "take a step back and look at it from the outside view" into "take a step back and look at it from an objective point of view" to prevent confusion, and upvote you for taking the time to correct my usage of the phrase.

comment by dthunt · 2014-07-04T21:47:09.217Z · LW(p) · GW(p)

My guess is that the site is "probably helping people who are trying to improve", because I would expect some of the materials here to help. I have certainly found a number of materials useful.

But a personal judgement of "probably helping" isn't the kind of thing you'd want. It'd be much better to find some way to measure the size of the effect. Not tracking your progress is a bad, bad sign.

comment by TheAncientGeek · 2014-07-04T20:45:38.756Z · LW(p) · GW(p)

LW8...rationality is more than one thing

comment by dthunt · 2014-07-04T19:46:21.103Z · LW(p) · GW(p)

My apologies, I thought you were referring to how people who do not use this site perceive people using the site, which seemed more likely to be what you were trying to communicate than the alternative.

Yes, the site viewed as a machine does not look like a well-designed rational-people-factory to me, either, unless I've missed the part where it's comparing its output to its input to see how it is performing. People do, however, note cognitive biases and what efforts to work against them have produced, from time to time, and there are other signs that seem consistent with a well-intentioned rational-people-factory.

And, no, not every criticism does. I can only speak for myself, and acknowledge that I have a number of times in the past failed to understand what someone was saying and assumed they were being dumb or somewhat crazy as a result. I sincerely doubt that's a unique experience.

Replies from: dthunt
comment by dthunt · 2014-07-04T20:04:48.361Z · LW(p) · GW(p)

http://lesswrong.com/lw/ec2/preventing_discussion_from_being_watered_down_by/, and other articles, I now read, because they are pertinent, and I want to know what sorts of work have been done to figure out how LW is perceived and why.

comment by David_Gerard · 2014-07-04T10:03:00.925Z · LW(p) · GW(p)

On the other hand, all the rational people I've met are very smart.

Surely you know people of average intelligence who consistently show "common sense" (so rare it's pretty much a superpower). They may not be super-smart, but they're sure as heck not dumb.

Replies from: Sophronius
comment by Sophronius · 2014-07-04T13:57:31.276Z · LW(p) · GW(p)

Common sense does seem like a superpower sometimes, but that's not a real explanation. I think that what we call common sense is mostly just the result of clear thinking and having a distaste for nonsense. If you favour reality over fancies, you are more likely to pay more attention to reality --> better mental habits --> stronger intuition = common sense.

But to answer your question, yes I do know people like that and I do respect them for it (though they still have above average intelligence, mostly). However, I would not trust them with making decisions on anything counter-intuitive like economics, unless they're also really good at knowing what experts to listen to.

Replies from: David_Gerard
comment by David_Gerard · 2014-07-04T16:24:03.012Z · LW(p) · GW(p)

However, I would not trust them with making decisions on anything counter-intuitive like economics, unless they're also really good at knowing what experts to listen to.

Yeah, but I'd say that about the smart people too.

Related, just seen today: The curse of smart people. SPOILER: "an ability to convincingly rationalize nearly anything."

Replies from: XiXiDu, Sophronius
comment by XiXiDu · 2014-07-04T17:11:18.040Z · LW(p) · GW(p)

Related, just seen today: The curse of smart people. SPOILER: "an ability to convincingly rationalize nearly anything."

The AI box experiment seems to support this. People who have been persuaded that it would be irrational to let an unfriendly AI out of the box are being persuaded to let it out of the box.

The ability of smarter or more knowledgeable people to convince less intelligent or less educated people of falsehoods (e.g. parents and children) shows that we need to put less weight on arguments and more weight on falsifiability.

Replies from: Sophronius
comment by Sophronius · 2014-07-04T18:17:21.850Z · LW(p) · GW(p)

I wouldn't use the AI box experiment as an example of anything, because it is specifically designed to be a black box: it's exciting precisely because the outcome confuses the heck out of people. I'm having trouble parsing this in Bayesian terms, but I think you're committing a rationalist sin by using an event that your model of reality couldn't predict in advance as evidence that your model of reality is correct.

I strongly agree that we need to put less weight on arguments but I think falsifiability is impractical in everyday situations.

comment by Sophronius · 2014-07-04T18:31:35.653Z · LW(p) · GW(p)

S1) Most smart people aren't rational but most rational people are smart
D1) There are people of average intelligence with common sense
S2) Yes they have good intuition but you cannot trust them with counter-intuitive subjects (people with average intelligence are not rational)
D2) You can't trust smart people with counter-intuitive subjects either (smart people aren't rational)

D2) does not contradict S1 because "most smart people aren't rational" isn't the same as "most rational people aren't smart", which is of course the main point of S1).

Interesting article, it confirms my personal experiences in corporations. However, I think the real problem is deeper than smart people being able to rationalize anything. The real problem is that overconfidence and rationalizing your actions makes becoming a powerful decision-maker easier. The mistakes they make due to irrationality don't catch up with them until after the damage is done, and then the next overconfident guy gets selected.

comment by ialdabaoth · 2014-03-01T17:31:10.567Z · LW(p) · GW(p)

Also, beware signaling games. A good dose of Hansonian cynicism, applied to your own in-group, is healthy.

Not if you want to be accepted by that group. Being bad at signaling games can be crippling - as much as intellectual signaling poisons discourse, it's also the glue that holds a community together enough to make discourse possible.

Example: how likely you are to get away with making a post or comment on signaling games is primarily dependent on how good you are at signaling games, especially how good you are at the "make the signal appear to plausibly be something other than a signal" part of signaling games.

Replies from: ChrisHallquist
comment by ChrisHallquist · 2014-03-03T04:21:33.026Z · LW(p) · GW(p)

You're right, being bad at signaling games can be crippling. The point, though, is to watch out for them and steer away from harmful ones. Actually, I wish I'd emphasized this in the OP: trying to suppress overt signaling games runs the risk of driving them underground, forcing them to be disguised as something else, rather than doing them in a self-aware and fun way.

Replies from: ialdabaoth
comment by ialdabaoth · 2014-03-03T04:25:18.475Z · LW(p) · GW(p)

[T]rying to suppress overt signaling games runs the risk of driving them underground, forcing them to be disguised as something else, rather than doing them in a self-aware and fun way.

Borrowing from the "Guess vs. Tell (vs. Ask)" meta-discussion, then, perhaps it would be useful for the community to have an explicit discussion about what kinds of signals we want to converge on? It seems that people with a reasonable understanding of game theory and evolutionary psychology would stand a better chance deliberately engineering our group's social signals than simply trusting our subconsciouses to evolve the most accurate and honest possible set.

Replies from: ChrisHallquist
comment by ChrisHallquist · 2014-03-03T04:34:36.128Z · LW(p) · GW(p)

The right rule is probably something like, "don't mix signaling games and truth seeking." If it's the kind of thing you'd expect in a subculture that doesn't take itself too seriously or imagine its quirks are evidence of its superiority to other groups, it's probably fine.

comment by [deleted] · 2014-03-05T05:44:30.102Z · LW(p) · GW(p)

But a humble attempt at rationalism is so much less funny...

More seriously, I could hardly agree more with the statement that intelligence has remarkably little to do with susceptibility to irrational ideas. And as much as I occasionally berate others for falling into absurd patterns, I realize that it pretty much has to be true that somewhere in my head is something just as utterly inane that I will likely never be able to see, and it scares me. As such sometimes I think dissensus is not only good, but necessary.

comment by Bugmaster · 2014-03-04T23:50:59.057Z · LW(p) · GW(p)

I've made no secret of the fact that I'm not a big fan of the principle of charity—often defined as the rule that you should interpret other people's arguments on the assumption that they are not saying anything stupid.

As far as I understand, the Principle of Charity is defined differently; it states that you should interpret other people's arguments on the assumption that these people are arguing in good faith. That is to say, you should assume that your interlocutor honestly believes everything he's saying, and that he has no ulterior motive beyond getting his point across. He may be entirely ignorant, stupid, or both; but he's not a liar or a troll.

This principle allows all parties to focus on the argument, and to stick to the topic at hand -- as opposed to spiraling into the endless rabbit-holes of psychoanalyzing each other.

Replies from: fubarobfusco, AshwinV
comment by fubarobfusco · 2014-03-05T19:31:19.482Z · LW(p) · GW(p)

Wikipedia quotes a few philosophers on the principle of charity:

Blackburn: "it constrains the interpreter to maximize the truth or rationality in the subject's sayings."

Davidson: "We make maximum sense of the words and thoughts of others when we interpret in a way that optimises agreement."

Also, Dennett in The Intentional Stance quotes Quine that "assertions startlingly false on the face of them are likely to turn on hidden differences of language", which seems to be a related point.

comment by AshwinV · 2014-04-25T07:14:28.766Z · LW(p) · GW(p)

Interesting point of distinction.

Irrespective of how you define the principle of charity (i.e. motivation-based or intelligence-based), I do believe that the principle should not become a universal guideline, and that it is important to apply it selectively, a sort of "principle of differential charity". This is obviously similar to basic real-world considerations (e.g. expertise, when it comes to the intelligence side of charity, and/or political/official positioning, when it comes to the motivation side).

I also realise that being differentially charitable may come with the risk of becoming even more biased, if your priors themselves are based on extremely biased findings. However, I would think that by and large it works well, and is a great time saver when deciding how much effort to put into evaluating claims and statements alike.

comment by John_Maxwell (John_Maxwell_IV) · 2014-03-03T07:09:23.150Z · LW(p) · GW(p)

Frankly, there seem to be a lot of people in the LessWrong community who imagine themselves to be, not just more rational than average, but paragons of rationality who other people should accept as such. I've encountered people talking as if it's ridiculous to suggest they might sometimes respond badly to being told the truth about certain subjects. I've encountered people asserting the rational superiority of themselves and others in the community for flimsy reasons, or no reason at all.

I agree with your assessment. My suspicion is that this is due to nth-degree imitations of certain high-status people in the LW community who have been rather shameless about speaking in extremely confident tones about things that they are only 70% sure about. The strategy I have resorted to for people like this is asking them/checking whether they have a PredictionBook account and, if not, assuming that they are overconfident just as is common with regular human beings. At some point I'd like to write an extended rebuttal to this post.

To provide counterpoint, however, there are certainly a lot of people who go around confidently saying things who are not as smart or rational as a 5th percentile LWer. So if the 5th percentile LWer is having an argument with one of these people, it's arguably an epistemological win if they are displaying a higher level of confidence than the other person in order to convince bystanders. An LWer friend of mine who is in the habit of speaking very confidently about things made me realize that maybe it was a better idea for me to develop antibodies to smart people speaking really confidently and start speaking really confidently myself than it was for me to get him to stop speaking as confidently.

comment by CCC · 2015-08-18T08:14:59.683Z · LW(p) · GW(p)

Asking someone "what's the evidence for that?" implicitly says, "I don't trust your rationality enough to take your word for it."

Not necessarily. It might be saying "You have an interesting viewpoint; let me see what basis it has, so that I may properly integrate this evidence into my worldview and correctly update my viewpoint"

comment by private_messaging · 2014-04-24T07:37:32.314Z · LW(p) · GW(p)

Particularly problematic is this self congratulatory process:

some simple mistake leads to a non-mainstream conclusion -> the world is insane and I'm so much more rational than everyone else -> endorphins released -> circuitry involved in mistake-making gets reinforced.

For example: IQ is the best predictor of job performance, right? So the world is insane for mostly hiring based on experience, test questions, and so on (depending on the field) rather than IQ, right? Cue the endorphins and reinforcement of careless thinking.

If you're not after endorphins, though: IQ is a good predictor of performance within the population of people who got hired traditionally, which is a very different population than the job applicants.
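(A quick simulation of that selection effect; all parameters are invented, and the point is only that the correlation measured inside the hired population need not look like the correlation among applicants.)

```python
# Range restriction in miniature: an IQ-ish score predicts performance among
# applicants, but among people who cleared a traditional hiring filter the
# within-group correlation shrinks. All numbers are made up.
import random

def corr(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

random.seed(0)
applicants = []
for _ in range(10000):
    iq_z = random.gauss(0, 1)                  # standardized IQ-ish score
    other = random.gauss(0, 1)                 # experience, interview skill, etc.
    performance = 0.5 * iq_z + 0.7 * other + random.gauss(0, 0.5)
    applicants.append((iq_z, other, performance))

# "Traditional" hiring weights the non-IQ factors heavily.
hired = [(iq, perf) for iq, other, perf in applicants if 0.4 * iq + 0.8 * other > 1.0]

print("corr(IQ, performance) among applicants:",
      round(corr([a[0] for a in applicants], [a[2] for a in applicants]), 2))
print("corr(IQ, performance) among the hired: ",
      round(corr([h[0] for h in hired], [h[1] for h in hired]), 2))
```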

Replies from: TheAncientGeek
comment by TheAncientGeek · 2014-04-24T08:50:53.407Z · LW(p) · GW(p)

These things can be hard to budge... they certainly look it... perhaps because the "I'm special" delusion and the "world is crazy" delusion need to fall at the same time.

Replies from: private_messaging
comment by private_messaging · 2014-04-24T11:16:52.790Z · LW(p) · GW(p)

Plus in many cases all that had been getting strengthened via reinforcement learning for decades.

It's also ridiculous how easy it is to be special in that imaginary world. Say I want to hire candidates really well - better than competition. I need to figure out the right mix of interview questions and prior experience and so on. I probably need to make my own tests. It's hard! It's harder still if I want to know if my methods work!

But in that crazy world, there's a test readily available, widely known, and widely used, and nobody's using it for that, because they're so irrational. And you can know you're special just by going "yeah, it sounds about right". It's like coming across 2x+2y=? and speculating about the stupid reasons why someone would be unable to apply 2+2=4 and 2*2=4 and conclude that it's 4xy.

comment by brazil84 · 2014-03-02T13:10:59.047Z · LW(p) · GW(p)

The problem with this is that other people are often saying something stupid. Because of that, I think charitable is over-rated compared to fair and accurate reading. When someone says something stupid, you don't have to pretend otherwise, but it's really important not to attribute to people stupid things they never said.

Well perhaps you should adopt a charitable interpretation of the principle of charity :) It occurs to me that the phrase itself might not be ideal since "charity" implies that you are giving something which the recipient does not necessarily deserve. Anyway, here's an example which I saw just yesterday:

The context is a discussion board where people argue, among other things, about discrimination against fat people.


Person 1: Answer a question for me: if you were stuck on the 3rd floor of a burning house and passed out, and you had a choice between two firefighter teams, one composed of men who weighted 150-170lbs and one composed of men well above 300, which team would you choose to rescue you?

Person 2: My brother is 6’ 9”, and with a good deal of muscle and a just a little pudge he’d be well over 350 (he’s currently on the thin side, probably about 290 or so). He’d also be able to jump up stairs and lift any-fucking-thing. Would I want him to save me? Hell yes. Gosh, learn to math,


It seems to me the problem here is that Person 2 seized upon an ambiguity in Person 1's question in order to dodge its central point. The Principle of Charity would have required Person 2 to assume that the 300-pound men in the hypothetical were of average height, not 6'9".

I think it's a somewhat important principle because it's very difficult to construct statements and questions without ambiguities which can be seized upon by those who are hostile to one's argument. If I say "the sky is blue," every reasonable person knows what I mean. And it's a waste of everyone's time and energy to make me say something like "The sky when viewed from the surface of the Earth generally appears blue to humans with normal color vision during the daytime when the weather is clear."

So call it whatever you want; the point is that one should be reasonable in interpreting others' statements and questions.

comment by ChrisHallquist · 2014-03-04T05:48:16.396Z · LW(p) · GW(p)

Skimming the "disagreement" tag in Robin Hanson's archives, I found a few posts that I think are particularly relevant to this discussion:

comment by trist · 2014-03-01T16:16:05.854Z · LW(p) · GW(p)

I wonder how much people's interactions with other aspiring rationalists in real life affect this problem. Specifically, I think people who have become used to being significantly better at forming true beliefs than everyone around them will tend to discount other people's opinions more.

comment by waveman · 2014-03-01T11:51:30.757Z · LW(p) · GW(p)

Everyone (and every group) thinks they are rational. This is not a distinctive feature of LW. Christianity and Buddhism make a lot of their rationality. Even Nietzsche acknowledged that it was the rationality of Christianity that led to its intellectual demise (as he saw it), as people relentlessly applied rationality tools to Christianity.

My own model of how rational we are is more in line with Ed Seykota's (http://www.seykota.com/tribe/TT_Process/index.htm) than with the typical geek model that we are basically rational with a few "biases" added on top. Ed Seykota was a very successful trader, featured in the book "Market Wizards", who concluded that trading success is not that difficult intellectually; the issues are all on the feelings side. He talks about trading, but the concepts apply across the board.

For everyone who thinks that they are rational, consider a) Are you in the healthy weight range? b) Did you get the optimum amount of exercise this week? c) Are your retirement savings on track? d) Did you waste zero time today? (I score 2/4).

Personally I think it would be progress if we took as a starting point the assumption that most of the things we believe are not rational. That everything needs to be stringently tested. That taking someone's word for it, unless they have truly earned it, does not make sense.

Also: I totally agree with OP that it is routine to see intelligent people who think of themselves as rational doing things and believing things that are complete nonsense. Intelligence and rationality are, to a first approximation, orthogonal.

Replies from: DanArmak, AspiringRationalist, elharo, None
comment by DanArmak · 2014-03-01T12:27:52.992Z · LW(p) · GW(p)

Everyone (and every group) thinks they are rational. This is not a distinctive feature of LW. Christianity and Buddhism make a lot of their rationality.

To the contrary, lots of groups make a big point of being anti-rational. Many groups (religious, new-age, political, etc.) align themselves in anti-scientific or anti-evidential ways. Most Christians, to make an example, assign supreme importance to (blind) faith that triumphs over evidence.

But more generally, humans are a-rational by default. Few individuals or groups are willing to question their most cherished beliefs, to explicitly provide reasons for beliefs, or to update on new evidence. Epistemic rationality is not the human default and needs to be deliberately researched, taught and trained.

And people, in general, don't think of themselves as being rational because they don't have a well-defined, salient concept of rationality. They think of themselves as being right.

Replies from: brazil84, orbenn, Eugine_Nier
comment by brazil84 · 2014-03-02T17:17:13.333Z · LW(p) · GW(p)

To the contrary, lots of groups make a big point of being anti-rational

Here's a hypothetical for you: Suppose you were to ask a Christian "Do you think the evidence goes more for or more against your belief in Christ?" How do you think a typical Christian would respond? I think most Christians would respond that the evidence goes more in favor of their beliefs.

Replies from: DanArmak
comment by DanArmak · 2014-03-02T18:56:12.642Z · LW(p) · GW(p)

I think the word "evidence" is associated with being pro-science and therefore, in most people's heads, anti-religion. So many Christians would respond by e.g. asking to define "evidence" more narrowly before they committed to an answer.

Also, the evidence claimed in favor of Christianity is mostly associated with the more fundamentalist interpretations; e.g. young-earthers who obsess with clearly false evidence vs. Catholics who accept evolution and merely claim a non-falsifiable Godly guidance. And there are fewer fundamentalists than there are 'moderates'.

However, suppose a Christian responded that the evidence is in the favor of Christianity. And then I would ask them: if the evidence was different and was in fact strongly against Christianity - if new evidence was found or existing evidence disproved - would you change your opinion and stop being a Christian? Would you want to change your opinion to match whatever the evidence turned out to be?

And I think most Christians, by far, would answer that they would rather have faith despite evidence, or that they would rather cling to evidence in their favor and disregard any contrary evidence.

Replies from: brazil84, CCC
comment by brazil84 · 2014-03-02T20:45:43.977Z · LW(p) · GW(p)

And I think most Christians, by far, would answer that they would rather have faith despite evidence, or that they would rather cling to evidence in their favor and disregard any contrary evidence

I doubt it. That may be how their brains work, but I doubt they would admit that they would cling to beliefs against the evidence. More likely they would insist that such a situation could never happen; that the contrary evidence must be fraudulent in some way.

I actually did ask the questions on a Christian bulletin board this afternoon. The first few responses have been pretty close to my expectations; we will see how things develop.

Replies from: DanArmak
comment by DanArmak · 2014-03-03T08:28:40.817Z · LW(p) · GW(p)

More likely they would insist that such a situation could never happen; that the contrary evidence must be fraudulent in some way.

That is exactly why I would label them not identifying as "rational". A rational person follows the evidence, he does not deny it. (Of course there are meta-rules, preponderance of evidence, independence of evidence, etc.)

I actually did ask the questions on a Christian bulletin board this afternoon. The first few responses have been pretty close to my expectations; we will see how things develop.

Upvoted for empirical testing, please followup!

However, I do note that 'answers to a provocative question on a bulletin board, without the usual safety guards of scientific studies' won't be very strong evidence about 'actual beliefs and/or behavior of people in hypothetical future situations'.

Replies from: brazil84
comment by brazil84 · 2014-03-03T10:15:42.108Z · LW(p) · GW(p)

That is exactly why I would label them not identifying as "rational". A rational person follows the evidence, he does not deny it.

That's not necessarily true and I can illustrate it with an example from the other side. A devout atheist once told me that even if The Almighty Creator appeared to him personally; performed miracles; etc., he would still remain an atheist on the assumption that he was hallucinating. One can ask if such a person thinks of himself as anti-rational given his pre-announcement that he would reject evidence that disproves his beliefs. Seems to me the answer is pretty clearly "no" since he is still going out of his way to make sure that his beliefs are in line with his assessment of the evidence.

Upvoted for empirical testing, please followup!

However, I do note that 'answers to a provocative question on a bulletin board, without the usual safety guards of scientific studies' won't be very strong evidence about 'actual beliefs and/or behavior of people in hypothetical future situations'.

Well I agree it's just an informal survey. But I do think it's pretty revealing given the question on the table:

Do Christians make a big point of being anti-rational?

Here's the thread:

http://www.reddit.com/r/TrueChristian/comments/1zd9t1/does_the_evidence_support_your_beliefs/

Of 4 or 5 responses, I would say that there is 1 where the poster sees himself as irrational.

Anyway, the original claim which sparked this discussion is that everyone thinks he is rational. Perhaps a better way to put it is that it's pretty unusual for anyone to think his beliefs are irrational.

Replies from: DanArmak
comment by DanArmak · 2014-03-03T13:01:29.364Z · LW(p) · GW(p)

A devout atheist once told me that even if The Almighty Creator appeared to him personally; performed miracles; etc., he would still remain an atheist on the assumption that he was hallucinating.

And I wouldn't call that person rational, either. He may want to be rational, and just be wrong about the how.

One can ask if such a person thinks of himself as anti-rational, given his pre-announcement that he would reject the evidence. I would describe people like that as "not rational" or "not wanting to be rational", rather than as anti-rational.

I think the relevant (psychological and behavioral) difference here is between not being rational, i.e. not always following where rationality might lead you or denying a few specific conclusions, and being anti-rational, which I would describe as seeing rationality as an explicit enemy and therefore being against all things rational by association.

ETA: retracted. Some Christians are merely not rational, but some groups are explicitly anti-rational: they attack rationality, science, and evidence-based reasoning by association, even when they don't disagree with the actual evidence or conclusions.

The Reddit thread is interesting. 5 isn't a big sample, and we got examples basically of all points of view. My prediction was that:

most Christians, by far, would answer that they would rather have faith despite evidence, or that they would rather cling to evidence in their favor and disregard any contrary evidence.

By my count, of those Reddit respondents who explicitly answered the question, these match the prediction, given the most probable interpretation of their words: Luc-Pronounced_Luke, tinknal. EvanYork comes close but doesn't explicitly address the hypothetical.

And these don't: Mageddon725, rethcir_, Va1idation.

So my prediction of 'most' is falsified, but the study is very underpowered :-)

Anyway, the original claim which sparked this discussion is that everyone thinks he is rational. Perhaps a better way to put it is that it's pretty unusual for anyone to think his beliefs are irrational.

I agree that it's unusual. My original claim was that many more people don't accept rationality as a valid or necessary criterion and don't even try to evaluate their beliefs' rationality. They don't see themselves as irrational, but they do see themselves as "not rational". And some of them further see themselves as anti-rational, and rationality as an enemy philosophy or dialectic.

Replies from: brazil84
comment by brazil84 · 2014-03-03T14:06:05.018Z · LW(p) · GW(p)

And I wouldn't call that person rational, either. He may want to be rational, and just be wrong about the how.

Well he might be rational and he might not be, but pretty clearly he perceives himself to be rational. Or at a minimum, he does not perceive himself to be not rational. Agreed?

Some Christians are merely not rational, but some groups are explicitly anti-rational: they attack rationality, science, and evidence-based reasoning by association, even when they don't disagree with the actual evidence or conclusions.

Would you mind providing two or three quotes from Christians which manifest this attitude so I can understand and scrutinize your point?

The Reddit thread is interesting. 5 isn't a big sample, and we got examples basically of all points of view.

That's true. But I would say that of the 5, there was only one individual who doesn't perceive himself to be rational. Two pretty clearly perceive themselves to be rational. And two are in a greyer area but pretty clearly would come up with rationalizations to justify their beliefs. Which is irrational but they don't seem to perceive it as such.

I agree that it's unusual. My original claim was that many more people don't accept rationality as a valid or necessary criterion and don't even try to evaluate their beliefs' rationality.

Well, I agree that a lot of people might not have a clear opinion about whether their beliefs are rational. But the bottom line is that when push comes to shove, most people seem to believe that their beliefs are a reasonable evidence-based conclusion.

But I am interested to see quotes from these anti-rational Christians you refer to.

Replies from: DanArmak
comment by DanArmak · 2014-03-04T08:41:40.636Z · LW(p) · GW(p)

After some reflection, and looking for evidence, it seems I was wrong. I felt very certain of what I said, but then I looked for justification and didn't find it. I'm sorry I led this conversation down a false trail. And thank you for questioning my claims and doing empirical tests.

(To be sure, I found some evidence, but it doesn't add up to large, numerous, or representative groups of Christians holding these views. Or in fact for these views being associated with Christianity more than other religions or non-religious 'mystical' or 'new age' groups. Above all, it doesn't seem these views have religion as their primary motivation. It's not worth while looking into the examples I found if they're not representative of larger groups.)

comment by CCC · 2014-03-03T14:31:43.632Z · LW(p) · GW(p)

Well, as a Christian myself, allow me to provide a data point for your questions:

"Do you think the evidence goes more for or more against your belief in Christ?"

(from the grandparent post) More for.

young-earthers who obsess with clearly false evidence

Young-earthers fall into a trap; there are parts of the Bible that are not intended to be taken literally (Jesus' parables are a good example). Genesis (at least the garden-of-eden section) is an example of this.

And then I would ask them: if the evidence was different and was in fact strongly against Christianity - if new evidence was found or existing evidence disproved - would you change your opinion and stop being a Christian?

It would have to be massively convincing evidence. I'm not sure that sufficient evidence can be found (but see next answer). I've seen stage magicians do some amazing things; the evidence would have to be convincing enough to convince me that it wasn't someone, with all the skills of David Copperfield, intentionally pulling the wool over my eyes in some manner.

Would you want to change your opinion to match whatever the evidence turned out to be?

In the sense that I want my map to match the territory; yes. In the sense that I do not want the territory to be atheistic; no.

I wouldn't mind so much if it turned out that (say) modern Judaism was 100% correct instead; it would be a big adjustment, but I think I could handle that much more easily. But the idea that there's nothing in the place of God; the idea that there isn't, in short, someone running the universe is one that I find extremely disquieting for some reason.

I imagine it's kind of like the feeling one might get, imagining the situation of being in a chauffeur-driven bus, travelling at full speed, along with the rest of humanity, and suddenly discovering that there's no-one behind the steering wheel and no-one on the bus can get into the front compartment.

...extremely disquieting.

Replies from: Viliam_Bur, Sophronius, Bugmaster
comment by Viliam_Bur · 2014-03-04T07:50:44.365Z · LW(p) · GW(p)

the idea that there isn't, in short, someone running the universe is one that I find extremely disquieting for some reason

It feels the same to me; I just believe it's true.

I imagine it's kind of like the feeling one might get, imagining the situation of being in a chauffeur-driven bus, travelling at full speed, along with the rest of humanity, and suddenly discovering that there's no-one behind the steering wheel and no-one on the bus can get into the front compartment.

Let's continue the same metaphor and imagine that many people in the bus decide to pretend that there is an invisible chauffeur and therefore everything is okay. This idea allows them to relax; at least partially (because parts of their minds are aware that the chauffeur should not be invisible, because that doesn't make much sense). And whenever someone in the bus suggests that we should do our best to explore the bus and try getting to the front compartment, these people become angry and insist that such distrust of our good chauffeur is immoral, and getting to the front compartment is illegal. Instead we should just sit quietly and sing a happy song together.

Replies from: CCC
comment by CCC · 2014-03-04T08:49:19.052Z · LW(p) · GW(p)

Let's continue the same metaphor and imagine that many people in the bus decide to pretend that there is an invisible chauffeur and therefore everything is okay. This idea allows them to relax; at least partially (because parts of their minds are aware that the chauffeur should not be invisible, because that doesn't make much sense). And whenever someone in the bus suggests that we should do our best to explore the bus and try getting to the front compartment, these people become angry and insist that such distrust of our good chauffeur is immoral, and getting to the front compartment is illegal. Instead we should just sit quietly and sing a happy song together.

...I'm not sure this metaphor can take this sort of strain. (Of course, it makes a difference if you can see into the front compartment; I'd assumed an opaque front compartment that couldn't be seen into from the rest of the bus).

Personally, I don't have any problem with people trying to, in effect, get into the front compartment. As long as it's done in an ethical way, of course (so, for example, if it involves killing people, then no; but even then, what I'd object to is the killing, not the getting-into-the-front). I do think it makes a lot of sense to try to explore the rest of the bus; the more we find out about the universe, the more effect we can have on it; and the more effect we can have on the universe, the more good we can do. (Also, the more evil we can do; but I'm optimistic enough to believe that humanity is more good than evil, on balance. Despite the actions of a few particularly nasty examples).

As I like to phrase it: God gave us brains. Presumably He expected us to use them.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2014-03-04T10:32:05.662Z · LW(p) · GW(p)

I assumed the front compartment was completely opaque in the past, and parts of it are gradually made transparent by science. Some people, less and less credibly, argue that the chauffeur has a weird body shape and still may be hidden behind the remaining opaque parts. But the smarter ones can already predict where this goes, so they already hypothesise an invisible chauffeur (separate magisteria, etc.). Most people probably believe some mix, like the chauffeur is partially transparent and partially visible, and the transparent and visible parts of the chauffeur's body happen to correspond to the parts of the compartment they can and cannot see from their seats.

Okay, I like your attitude. You probably wouldn't ban teaching evolutionary biology at schools.

Replies from: CCC
comment by CCC · 2014-03-04T13:08:36.190Z · LW(p) · GW(p)

I think this is the point at which the metaphor has become more of an impediment to communication than anything else. I recognise what I think you're referring to; it's the idea of the God of the gaps (in short, the idea that God is responsible for everything that science has yet to explain; which starts leading to questions as soon as science explains something new).

As an argument for theism, the idea that God is only responsible for things that haven't yet been otherwise explained is pretty thoroughly flawed to start with. (I can go into quite a bit more detail if you like).

Okay, I like your attitude. You probably wouldn't ban teaching evolutionary biology at schools.

No, I most certainly would not. Personally, I think that the entire evolution debate has been hyped up to an incredible degree by a few loud voices, for absolutely no good reason; there's nothing in the theory of evolution that runs contrary to the idea that the universe is created. Evolution just gives us a glimpse at the mechanisms of that creation.

comment by Sophronius · 2014-03-06T21:55:15.967Z · LW(p) · GW(p)

I imagine it's kind of like the feeling one might get, imagining the situation of being in a chauffeur-driven bus, travelling at full speed, along with the rest of humanity, and suddenly discovering that there's no-one behind the steering wheel and no-one on the bus can get into the front compartment.

This is precisely how I feel about humanity. I mean, we came within a hair's breadth of annihilating all human life on the planet during the cold war, for pete's sake. Now that didn't come to pass, but if you look at all the atrocities that did happen during the history of humanity... even if you're right and there is a driver, he is most surely drunk behind the wheel.

Still, I can sympathise. After all, people also generally prefer to have an actual person piloting their plane, even if the auto-pilot is better (or so I've read). There seems to be some primal desire to want someone to be in charge. Or as the Joker put it: "Nobody panics when things go according to plan. Even if that plan is horrifying."

Replies from: CCC
comment by CCC · 2014-03-07T08:23:49.433Z · LW(p) · GW(p)

I mean, we came within a hair's breadth of annihilating all human life on the planet during the cold war, for pete's sake. Now that didn't come to pass, but if you look at all the atrocities that did happen during the history of humanity...

Atrocities in general are a point worth considering. They make it clear that, even given the existence of God, there's a lot of agency being given to the human race; it's up to us as a race to not mess up totally, and to face the consequences of the actions of others.

comment by Bugmaster · 2014-03-05T01:41:00.460Z · LW(p) · GW(p)

I find your post very interesting, because I tend to respond almost exactly the same way when someone asks me why I'm an atheist. The one difference is the "extremely disquieting" part; I find it hard to relate to that. From my point of view, reality is what it is; i.e., it's emotionally neutral.

Anyway, I find it really curious that we can disagree so completely while employing seemingly identical lines of reasoning. I'm itching to ask you some questions about your position, but I don't want to derail the thread, or to give the impression of getting all up in your business, as it were...

Replies from: CCC
comment by CCC · 2014-03-05T04:21:02.637Z · LW(p) · GW(p)

Reality stops being emotionally neutral when it affects me directly. If I were to wake up to find that my bed has been moved to a hovering platform over a volcano, then I will most assuredly not be emotionally neutral about the discovery (I expect I would experience shock, terror, and lots and lots of confusion).

I'm itching to ask you some questions about your position

Well, I'd be quite willing to answer them. Maybe you could open up a new thread in Discussion, and link to it from here?

comment by orbenn · 2014-03-01T17:16:22.170Z · LW(p) · GW(p)

I think we're getting some word-confusion. Groups that you claim "make a big point of being anti-rational" are against the things that carry the label "rational". However, they do tend to think of their own beliefs as being well thought out (i.e. rational).

Replies from: DanArmak
comment by DanArmak · 2014-03-01T18:39:04.773Z · LW(p) · GW(p)

No, I think we're using words the same way. I disagree with your statement that all or most groups "think of their own beliefs as being well thought out (i.e. rational)". They think of their beliefs as being right, but not well thought out.

"Well thought out" should mean:

  1. Being arrived at through thought (science, philosophy, discovery, invention), rather than writing the bottom line first and justifying it later or not at all (revelation, mysticism, faith deliberately countering evidence, denial of the existence of objective truth).
  2. Thought out to its logical consequences, without being selective about which conclusions you adopt or compartmentalizing them, making sure there are no internal contradictions, and dealing with any repugnant conclusions.
comment by Eugine_Nier · 2014-03-01T19:16:45.278Z · LW(p) · GW(p)

Most Christians, to make an example, assign supreme importance to (blind) faith that triumphs over evidence.

That's not what most Christians mean by faith.

Replies from: DanArmak
comment by DanArmak · 2014-03-01T21:56:13.185Z · LW(p) · GW(p)

The comment you link to gives a very interesting description of faith:

The sense of "obligation" in faith is that of duty, trust, and deference to those who deserve it. If someone deserves our trust, then it feels wrong, or insolent, or at least rude, to demand independent evidence for their claims.

I like that analysis! And I would add: obligation to your social superiors, and to your actual legal superiors (in a traditional society), is a very strong requirement and to deny faith is not merely to be rude, but to rebel against the social structure which is inseparable from institutionalized religion.

However, I think this is more of an explanation of how faith operates, not what it feels like or how faithful people describe it. It's a good analysis of the social phenomenon of faith from the outside, but it's not a good description of how it feels from the inside to be faithful.

This is because the faith actually required of religious people is faith in the existence of God and other non-evident truths claimed by their religion. As a faithful person, you can't feel that faith is "duty, trust, obligation"; you feel that it is belief. You can't feel that to be unfaithful would be to wrong someone or to rebel; you feel that it would be to be wrong about how the world really is.

However, I've now read Wikipedia on Faith in Christianity and I see there are a lot of complex opinions about the meaning of this word. So now I'm less sure of my opinion. I'm still not convinced that most Christians mean "duty, trust, deference" when they say "faith", because WP quotes many who disagree and think it means "belief".

comment by NoSignalNoNoise (AspiringRationalist) · 2014-03-06T01:58:44.161Z · LW(p) · GW(p)

For everyone who thinks that they are rational, consider a) Are you in the healthy weight range? b) Did you get the optimum amount of exercise this week? c) Are your retirement savings on track? d) Did you waste zero time today? (I score 2/4).

That sentence motivated me to overcome the trivial inconvenience of logging in on my phone so I could upvote it.

comment by elharo · 2014-03-01T19:54:17.898Z · LW(p) · GW(p)

a) Why do you expect a rational person would necessarily avoid the environmental problems that cause overweight and obesity? Especially given that scientists are very unclear amongst themselves as to what causes obesity and weight gain? Even if you adhere to the notion that weight gain and loss is simply a matter of calorie consumption and willpower, why would you assume a rational person has more willpower?

b) Why do you expect that a rational person would necessarily value the optimum amount of exercise (presumably optimal for health) over everything else they might have done with their time this week? And again, given that scientists have even less certainty about the optimum amount or type of exercise than they do about the optimum amount of food we should eat.

c) Why do you assume that a rational person is financially able to save for retirement? There are many people on this planet who live on less than a dollar a day. Does being born poor imply a lack of rationality?

d) Why do you assume a rational person does not waste time on occasion?

Rationality is not a superpower. It does not magically produce health, wealth, or productivity. It may assist in the achievement of those and other goals, but it is neither necessary nor sufficient.

Replies from: AspiringRationalist, brazil84, Vaniver, brazil84
comment by NoSignalNoNoise (AspiringRationalist) · 2014-03-06T02:11:05.364Z · LW(p) · GW(p)

c) Why do you assume that a rational person is financially able to save for retirement? There are many people on this planet who live on less than a dollar a day. Does being born poor imply a lack of rationality?

The question was directed at people discussing rationality on the internet. If you can afford some means of internet access, you are almost certainly not living on less than a dollar a day.

Replies from: CAE_Jones
comment by CAE_Jones · 2014-03-06T04:32:31.814Z · LW(p) · GW(p)

I receive less in SSI than I'm paying on college debt (no degree), am legally blind, unemployed, and have internet access because these leave me with no choice but to live with my parents (no friends within 100mi). Saving for retirement is way off my radar.

(I do have more to say on how I've handled this, but it seems more appropriate for the rationality diaries. I will ETA a link if I make such a comment.)

comment by brazil84 · 2014-03-02T17:51:02.255Z · LW(p) · GW(p)

Why do you expect a rational person would necessarily avoid the environmental problems that cause overweight and obesity? Especially given that scientists are very unclear amongst themselves as to what causes obesity and weight gain? Even if you adhere to the notion that weight gain and loss is simply a matter of calorie consumption and willpower, why would you assume a rational person has more willpower?

A more rational person might have a better understanding of how his mind works and use that understanding to deploy his limited willpower to maximum effect.

comment by Vaniver · 2014-03-02T09:32:04.632Z · LW(p) · GW(p)

d) Why do you assume a rational person does not waste time on occasion?

Even if producing no external output, one can still use time rather than waste it. waveman's post is about the emotional difficulties of being effective -- and so, to the extent that rationality is about winning, a rational person has mastered those difficulties.

comment by brazil84 · 2014-03-02T23:36:33.899Z · LW(p) · GW(p)

Why do you expect that a rational person would necessarily value the optimum amount of exercise (presumably optimal for health) over everything else they might have done with their time this week?

Most likely because getting regular exercise is a pretty good investment of time. Of course some people might rationally choose not to make the investment for whatever reason, but if someone doesn't exercise regularly there is an excellent chance that it's akrasia at work.

One can ask if rational people are less likely to fall victim to akrasia. My guess is that they are, since a rational person is likely to have a better understanding of how his brain works. So he is in a better position to come up with ways to act consistently with his better judgment.

comment by [deleted] · 2014-03-02T16:17:23.235Z · LW(p) · GW(p)

For everyone who thinks that they are rational, consider a) Are you in the healthy weight range? b) Did you get the optimum amount of exercise this week? c) Are your retirement savings on track? d) Did you waste zero time today? (I score 2/4).

I wasted some time today. Is 3-4 times per week of strength training and a half hour of cardio enough exercise? If so, I think I get 3/4. Woot, but I actually don't see the point of the exercise, since I don't even aspire to be perfectly rational (especially since I don't know what I would be perfectly rational about).

comment by somervta · 2014-03-01T09:03:52.925Z · LW(p) · GW(p)

sake-handling -> snake-handling

Anecdotally, I feel like I treat anyone on LW as someone to take much more seriously because of that, but it's just not different enough for any of the things-perfect-rationalists-should-do to start to apply.

comment by Wes_W · 2014-03-03T19:44:05.300Z · LW(p) · GW(p)

Excellent post. I don't have anything useful to add at the moment, but I am wondering if the second-to-last paragraph:

First, yes, some claims are more rational than others. Some people even do better at rationality overall than others. But the idea of a real person being anything close to an ideal rationalist is an extraordinary claim, and should be met with appropriate skepticism and demands for evidence. Don't forget that

is just missing a period at the end, or has a fragmented sentence.

comment by brazil84 · 2014-03-02T21:17:55.129Z · LW(p) · GW(p)

By the way, I agree with you that there is a problem with rationalists who are a lot less rational than they realize.

What would be nice is if there were a test for rationality just like one can test for intelligence. It seems that it would be hard to make progress without such a test.

Unfortunately there would seem to be a lot of opportunity for a smart but irrational person to cheat on such a test without even realizing it. For example, if it were announced that atheism is a sign of rationality, our hypothetical smart but irrational person would proudly announce his atheism and would tell himself and others that he is an atheist because he is a smart, rational person and that's how he has processed the evidence.

Another problem is that there is no practical way to assess the rationality of the person who is designing the rationality test.

Someone mentioned weight control as a rationality test. This is an intriguing idea -- I do think that self-deception plays an important role in obesity. I would like to think that in theory, a rational fat person could think about the way his brain and body work; create a reasonably accurate model; and then develop and implement a strategy for weight loss based on his model.

Perhaps some day you will be able to wear a mood-ring type device which beeps whenever you are starting to engage in self-deception.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2014-03-03T15:02:07.933Z · LW(p) · GW(p)

if it were announced that atheism is a sign of rationality, our hypothetical smart but irrational person would proudly announce his atheism

Rationality tests shouldn't be about professing things; not even things correlated with rationality. Intelligence tests also aren't about professing intelligent things (whatever those would be), they are about solving problems. Analogically, rationality tests should require people to use rationality to solve novel situations, not just guess the teacher's password.

there is no practical way to assess the rationality of the person who is designing the rationality test

If the test depends too much on trusting the rationality of the person designing the test, they are doing it wrong. Again, IQ tests are not made by finding the highest-IQ people on the planet and telling them: "Please use your superior intelligence in ways incomprehensible to us mere mortals to design a good IQ test."

Both intelligence and rationality are necessary in designing an IQ test or a rationality test, but that's in a similar way that intelligence and rationality are necessary to design a new car. The act of designing requires brainpower; but it's not generally true that tests of X must be designed by people with high X.

Replies from: brazil84
comment by brazil84 · 2014-03-03T15:37:30.034Z · LW(p) · GW(p)

Analogically, rationality tests should require people to use rationality to solve novel situations, not just guess the teacher's password.

I agree with this. But I can't think of such a rationality test. I think part of the problem is that a smart but irrational person could use his intelligence to figure out the answers that a rational person would come up with and then choose those answers.

On an IQ test, if you are smart enough to figure out the answers that a smart person would choose, then you yourself must be pretty smart. But I don't think the same thing holds for rationality.

If the test depends too much on trusting the rationality of the person designing the test, they are doing it wrong.

Well yes, but it's hard to think of how to do it right. What's an example of a question you might put on a rationality test?

Replies from: Viliam_Bur
comment by Viliam_Bur · 2014-03-04T08:34:27.694Z · LW(p) · GW(p)

I agree that rationality tests will be much more difficult than IQ tests. First, we already have IQ tests, so if we tried to create a new one, we would already know what to do and what to expect. Second, rationality tests may be inherently more difficult.

Still I think that if we look at the history of the IQ tests, we can take some lessons from there. I mean; imagine that there are no IQ tests yet, and you are supposed to invent the first one. The task would probably seem impossible, and there would be similar objections. Today we know that the first IQ tests got a few things wrong. And we also know that the "online IQ tests" are nonsense from the psychometrics point of view, but to people without psychological education they seem right, because their intuitive idea of IQ is "being able to answer difficult questions invented by other intelligent people", when in fact the questions in Raven's progressive matrices are rather simple.

20 years later we may have analogical knowledge about the rationality tests, and some things may seem obvious in hindsight. At this moment, while respecting that intelligence is not the same thing as rationality, IQ tests are the outside-view equivalent I will use for making guesses, because I have no better analogy.

The IQ tests were first developed for small children. The original purpose of the early IQ tests was to tell whether a 6-year-old child was ready to go to elementary school, or whether we should give them another year. They probably weren't even called IQ tests yet, but school readiness tests. Only later was the idea of some people being "smarter/dumber for their age" generalized to all ages.

Analogically, we could probably start measuring rationality where it is easiest: in children. I'm not saying it will be easy, just easier than with adults. Many of small children's logical mistakes will be less politically controversial. And it is easier to reason about mistakes you are not prone to making yourself. Some of the things we learn from children may later be useful for studying adults as well.

Within intelligence, there was a controversy (and some people still try to keep it alive) about whether "intelligence" is just one thing, or many different things (multiple intelligences). There will be analogical questions about "rationality". And the proper way to answer these questions is to create tests for the individual hypothetical components, and then to gather the data and see how these abilities correlate. Measurement and math, not speculation. Despite making an analogy here, I am not saying the answer will be the same. Maybe "resisting peer pressure" and "updating on new evidence" and "thinking about multiple possibilities before choosing and defending one of them" and "not having a strong identity that dictates all answers" will strongly correlate with each other; maybe they will be independent or even contradictory; maybe some of them will correlate together and the others will not, so we get two or three clusters of traits. This is an empirical question and must be answered by measurement.

Some of the intelligence tests in the past were strongly culturally biased (e.g. they contained questions about history or literature, knowledge of proverbs or cultural norms), and some of them required specific skills (e.g. mathematical). But some of them were not. Now that we have many different solutions, we can pick the less biased ones. But even the old ones were better than nothing; they were useful approximations within a given cultural group. If the first rationality tests are similarly flawed, that will not mean the entire field is doomed; later the tests can be improved and the heavily culture-specific questions removed, getting closer to the abstract essence of rationality.

I agree there is a risk that an irrational person might have a good model of what a rational person would do (while it is impossible for a stupid person to predict how a smart person would solve a difficult problem). I can imagine a smart religious fanatic thinking: "What would HJPEV, the disgusting little heathen, do in this situation?" and running a rationality routine in a sandbox. In that case, the best we could achieve would be tests measuring someone's capacity to think rationally if they choose to. Such a person could still later become an ugly surprise. Well... I suppose we just have to accept this, and add it to the list of warnings about what the rationality tests don't show.

As an example of the questions in tests: I would probably not try to test "rationality" as a whole with a single question, but write separate questions focused on each component. For example, a test of resisting peer pressure would describe a story where one person provides good evidence for X, but many people provide obviously bad reasoning for Y, and you have to choose which is more likely. For a test of updating, I would provide multiple pieces of evidence, where the first three point towards an answer X, but the following seven point towards an answer Y, and might even contain an explanation of why the first three pieces were misleading. The reader would be asked to write an answer after reading the first three pieces, and again after reading all of them. For seeing multiple solutions, I would present some puzzle with multiple solutions, and the task would be to find as many as possible within a time limit.

Each of these questions has some obvious flaws. But, by analogy with the IQ tests, I believe the correct approach is to try dozens of flawed questions, gather data, see how much they correlate with each other, do a factor analysis, and gradually replace them with purer versions, etc.
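
As a rough sketch of what that data-gathering step could look like (Python; the sub-test names and all the scores below are invented, and the one-factor model is just one of several things one might fit):

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)
n = 500

# Pretend we administered four draft sub-tests to 500 people.
# Scores are simulated around a possible common factor g; in reality
# they would come from the actual flawed questions described above.
g = rng.normal(0, 1, n)
scores = np.column_stack([
    60 + 10 * (0.7 * g + rng.normal(0, 1, n)),  # resisting peer pressure
    60 + 10 * (0.6 * g + rng.normal(0, 1, n)),  # updating on new evidence
    60 + 10 * (0.5 * g + rng.normal(0, 1, n)),  # seeing multiple solutions
    60 + 10 * (0.1 * g + rng.normal(0, 1, n)),  # an item that may not belong
])

# Do the candidate components hang together at all?
print(np.round(np.corrcoef(scores, rowvar=False), 2))

# One-factor model: loadings show which items track a common factor.
fa = FactorAnalysis(n_components=1).fit(scores)
print(np.round(fa.components_, 2))
```

Items with near-zero loadings would be the ones to replace with purer versions in the next round.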

Replies from: brazil84
comment by brazil84 · 2014-03-04T10:05:27.913Z · LW(p) · GW(p)

Still I think that if we look at the history of the IQ tests, we can take some lessons from there. I mean; imagine that there are no IQ tests yet, and you are supposed to invent the first one. The task would probably seem impossible, and there would be similar objections.

It's hard to say given that we have the benefit of hindsight, but at least we wouldn't have to deal with what I believe to be the killer objection -- that irrational people would subconsciously cheat if they knew they were being tested.

If the first rationality tests are similarly flawed, that will not mean the entire field is doomed; later the tests can be improved and the heavily culture-specific questions removed, getting closer to the abstract essence of rationality.

I agree, but that still doesn't get you any closer to overcoming the problem I described.

I agree there is a risk that an irrational person might have a good model of what a rational person would do (while it is impossible for a stupid person to predict how a smart person would solve a difficult problem). I can imagine a smart religious fanatic thinking: "What would HJPEV, the disgusting little heathen, do in this situation?" and running a rationality routine in a sandbox. In that case, the best we could achieve would be tests measuring someone's capacity to think rationally if they choose to.

To my mind that's not very helpful because the irrational people I meet have been pretty good at thinking rationally if they choose to. Let me illustrate with a hypothetical: Suppose you meet a person with a fervent belief in X, where X is some ridiculous and irrational claim. Instead of trying to convince them that X is wrong, you offer them a bet, the outcome of which is closely tied to whether X is true or not. Generally they will not take the bet. And in general, when you watch them making high or medium stakes decisions, they seem to know perfectly well -- at some level -- that X is not true.

Of course not all beliefs are capable of being tested in this way, but when they can be tested the phenomenon I described seems pretty much universal. The reasonable inference is that irrational people are generally speaking capable of rational thought. I believe this is known as "standby rationality mode."

Replies from: TheOtherDave, Viliam_Bur
comment by TheOtherDave · 2014-03-04T15:29:44.505Z · LW(p) · GW(p)

I agree with you that people who assert crazy beliefs frequently don't behave in the crazy ways those beliefs would entail.

This doesn't necessarily mean they're engaging in rational thought.

For one thing, the real world is not that binary. If I assert a crazy belief X, but I behave as though X is not true, it doesn't follow that my behavior is sane... only that it isn't crazy in the specific way indicated by X. There are lots of ways to be crazy.

More generally, though... for my own part what I find is that most people's betting/decision making behavior is neither particularly "rational" nor "irrational" in the way I think you're using these words, but merely conventional.

That is, I find most people behave the way they've seen their peers behaving, regardless of what beliefs they have, let alone what beliefs they assert (asserting beliefs is itself a behavior which is frequently conventional). Sometimes that behavior is sane, sometimes it's crazy, but in neither case does it reflect sanity or insanity as a fundamental attribute.

You might find yvain's discussion of epistemic learned helplessness enjoyable and interesting.

Replies from: brazil84
comment by brazil84 · 2014-03-04T19:47:20.174Z · LW(p) · GW(p)

This doesn't necessarily mean they're engaging in rational thought.

For one thing, the real world is not that binary. If I assert a crazy belief X, but I behave as though X is not true, it doesn't follow that my behavior is sane... only that it isn't crazy in the specific way indicated by X. There are lots of ways to be crazy.

More generally, though... for my own part what I find is that most people's betting/decision making behavior is neither particularly "rational" nor "irrational" in the way I think you're using these words, but merely conventional.

That is, I find most people behave the way they've seen their peers behaving, regardless of what beliefs they have, let alone what beliefs they assert (asserting beliefs is itself a behavior which is frequently conventional)

That may very well be true... I'm not sure what it says about rationality testing. If there is a behavior which is conventional but possibly irrational, it might not be so easy to assess its rationality. And if it's conventional and clearly irrational, how can you tell if a testee engages in it? Probably you cannot trust self-reporting.

Replies from: TheOtherDave, Lumifer
comment by TheOtherDave · 2014-03-04T19:52:54.781Z · LW(p) · GW(p)

A lot of words are getting tossed around here whose meanings I'm not confident I understand. Can you say what it is you want to test for, here, without using the word "rational" or its synonyms? Or can you describe two hypothetical individuals, one of whom you'd expect to pass such a test and the other you'd expect to fail?

Replies from: brazil84
comment by brazil84 · 2014-03-04T20:45:39.091Z · LW(p) · GW(p)

Our hypothetical person believes himself to be very good at not letting his emotions and desires color his judgments. However his judgments are heavily informed by these things and then he subconsciously looks for rationalizations to justify them. He is not consciously aware that he does this.

Ideally, he should fail the rationality test.

Conversely, someone who passes the test is someone who correctly believes that his desires and emotions have very little influence over his judgments.

Does that make sense?

And by the way, one of the desires of Person #1 is to appear "rational" to himself and others. So it's likely he will subconsciously attempt to cheat on any "rationality test."

Replies from: TheOtherDave
comment by TheOtherDave · 2014-03-04T21:19:23.878Z · LW(p) · GW(p)

Yeah, that helps.

If I were constructing a test to distinguish person #1 from person #2, I would probably ask for them to judge a series of scenarios that were constructed in such a way that formally, the scenarios were identical, but each one had different particulars that related to common emotions and desires, and each scenario was presented in isolation (e.g., via a computer display) so it's hard to go back and forth and compare.

I would expect P2 to give equivalent answers in each scenario, and P1 not to (though they might try).
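
Something like the following scoring scheme is what I have in mind (a Python sketch; the scenario names and the 1-7 answers are placeholders, not real test items):

```python
# Each pair presents the same formal decision problem twice: once with neutral
# particulars, once with emotionally loaded particulars. Answers are 1-7 judgments.
scenario_pairs = {
    "sunk_cost":  {"neutral": 2, "loaded": 5},
    "base_rates": {"neutral": 6, "loaded": 6},
    "fairness":   {"neutral": 3, "loaded": 4},
}

def inconsistency(pairs):
    """Mean absolute shift between formally identical framings (0 = perfectly consistent)."""
    shifts = [abs(p["neutral"] - p["loaded"]) for p in pairs.values()]
    return sum(shifts) / len(shifts)

print(inconsistency(scenario_pairs))  # expect P2 to score near 0, P1 noticeably higher
```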

Replies from: brazil84
comment by brazil84 · 2014-03-04T22:41:18.772Z · LW(p) · GW(p)

I would expect P2 to give equivalent answers in each scenario, and P1 not to (though they might try)

I doubt that would work, since P1 most likely has a pretty good standby rationality mode which can be subconsciously invoked if necessary.

But can you give an example of two such formally identical scenarios so I can think about it?

Replies from: TheOtherDave
comment by TheOtherDave · 2014-03-05T03:48:44.173Z · LW(p) · GW(p)

It's a fair question, but I don't have a good example to give you, and constructing one would take more effort than I feel like putting into it. So, no, sorry.

That said, what you seem to be saying is that P1 is capable of making decisions that aren't influenced by their emotions and desires (via "standby rationality mode") but does not in fact do so except when taking rationality tests, whereas P2 is capable of it and also does so in real life.

If I've understood that correctly, then I agree that no rationality test can distinguish P1 and P2's ability to make decisions that aren't influenced by their emotions and desires.

Replies from: brazil84
comment by brazil84 · 2014-03-05T09:06:03.668Z · LW(p) · GW(p)

It's a fair question, but I don't have a good example to give you, and constructing one would take more effort than I feel like putting into it. So, no, sorry.

That's unfortunate, because this strikes me as a very important issue. Even being able to measure one's own rationality would be very helpful, let alone that of others.

That said, what you seem to be saying is that P1 is capable of making decisions that aren't influenced by their emotions and desires (via "standby rationality mode") but does not in fact do so except when taking rationality tests, whereas P2 is capable of it and also does so in real life.

I'm not sure I would put it in terms of "making decisions" so much as "making judgments," but basically yes. Also, P1 does make rational judgments in real life but the level of rationality depends on what is at stake.

If I've understood that correctly, then I agree that no rationality test can distinguish P1 and P2's ability to make decisions that aren't influenced by their emotions and desires.

Well, one idea is to look more directly at what is going on in the brain with some kind of imaging technique. Perhaps self-deception or result-oriented reasoning has a telltale signature.

Also, perhaps this kind of irrationality is more cognitively demanding. To illustrate, suppose you are having a Socratic dialogue with someone who holds irrational belief X. Instead of simply laying out your argument, you ask the person whether he agrees with Proposition Y, where Proposition Y seems pretty obvious and indisputable. Our rational person might quickly and easily agree or disagree with Y. Whereas our irrational person needs to think more carefully about Y; decide whether it might undermine his position; and if it does, construct a rationalization for rejecting Y. This difference in thinking might be measured in terms of reaction times.
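
For instance, one could compare reaction-time distributions on neutral versus belief-relevant propositions with a simple nonparametric test (a Python sketch; the timings are invented to show the shape of the analysis, not real data):

```python
from scipy.stats import mannwhitneyu

# Hypothetical response times (seconds) for agree/disagree judgments.
# "Neutral" propositions have no bearing on the subject's cherished belief X;
# "loaded" propositions are the obvious-but-inconvenient Proposition Y type.
neutral_rt = [1.1, 0.9, 1.3, 1.0, 1.2, 0.8, 1.1, 1.0]
loaded_rt = [2.4, 1.9, 2.8, 2.1, 1.7, 2.6, 2.2, 2.0]

# Consistently slower answers on loaded items would be (weak) evidence of the
# extra "does this undermine my position?" step described above.
stat, p = mannwhitneyu(loaded_rt, neutral_rt, alternative="greater")
print(f"U = {stat:.1f}, one-sided p = {p:.4f}")
```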

comment by Lumifer · 2014-03-04T20:59:58.962Z · LW(p) · GW(p)

[ha-ha-only-serious](http://www.catb.org/jargon/html/H/ha-ha-only-serious.html)

Rationality is commonly defined as winning. Therefore rationality testing is easy -- just check if the subject is a winner or a loser.

comment by Viliam_Bur · 2014-03-04T10:53:47.575Z · LW(p) · GW(p)

Okay, I think there is a decent probability that you are right, but at this moment we need more data, which we will get by trying to create different kinds of rationality tests.

A possible outcome is that we won't get true rationality tests, but at least something partially useful, e.g. tests selecting the people capable of rational thought, which includes a lot of irrational people, but still not everyone. Which may still appear to be just another form of intelligence tests (a sufficiently intelligent irrational person is able to make rational bets, and still believe they have an invisible dragon in the garage).

So... perhaps this is a moment where I should make a bet about my beliefs. Assuming that Stanovich does not give up, and that other people follow him (that is, assuming that enough psychologists will even try to create rationality tests), I'd guess... probability 20% within 5 years, 40% within 10 years, 80% ever (pre-Singularity) that there will be a test which predicts rationality significantly better than an IQ test does. Not completely reliably, but sufficiently that you would want your employees to be tested by that test instead of an IQ test, even if you had to pay more for it. (Which doesn't mean that employers actually will want to use it. Or will be legally allowed to.) And probability 10% within 10 years, 60% ever that a true "rationality test" will be invented, at least for values up to 130 (which many compartmentalizing people will still pass). These numbers are just a wild guess; tomorrow I would probably give different values. I just thought it would be proper to express my beliefs in this format, because it encourages rationality in general.

Replies from: brazil84
comment by brazil84 · 2014-03-04T12:41:40.806Z · LW(p) · GW(p)

Which may still appear to be just another form of intelligence tests

Yes, I have a feeling that "capability of rationality" would be highly correlated with IQ.

Not completely reliably, but sufficiently that you would want your employees to be tested by that test instead of an IQ test

Your mention of employees raises another issue, which is who the test would be aimed at. When we first started discussing the issue, I had an (admittedly vague) idea in my head that the test could be for aspiring rationalists, i.e. that it could be used to bust irrational LessWrong posters who are far less rational than they realize. It's arguably more of a challenge to come up with a test to smoke out the self-proclaimed paragon of rationality who has the advantage of careful study and who knows exactly what he is being tested for.

By analogy, consider the Crowne-Marlowe Social Desirability Scale, which has been described as a test which measures "the respondent's desire to exaggerate his own moral excellence and to present a socially desirable facade." Here is a sample question from the test:

  1. T F I have never intensely disliked anyone

Probably the test works pretty well for your typical Joe or Jane Sixpack. But someone who is intelligent, who has studied up in this area, and who knows what's being tested will surely conceal his desire to exaggerate his moral excellence.

That said, having thought about it, I do think there is a decent chance that solid rationality tests will be developed, at least for subjects who are unprepared. One possibility is to measure reaction times as with "Project Implicit." Perhaps self-deception is more cognitively demanding than self-honesty and therefore a clever test might measure it. But you still might run into the problem of subconscious cheating.

Replies from: Nornagest, Viliam_Bur
comment by Nornagest · 2014-03-06T23:57:05.783Z · LW(p) · GW(p)

Perhaps self-deception is more cognitively demanding than self-honesty and therefore a clever test might measure it.

If anything, I might expect the opposite to be true in this context. Neurotypical people have fast and frugal conformity heuristics to fall back on, while self-honesty on a lot of questions would probably take some reflection; at least, that's true for questions that require aggregating information or assessing personality characteristics rather than coming up with a single example of something.

It'd definitely be interesting to hook someone up to a polygraph or EEG and have them take the Crowne-Marlowe Scale, though.

Replies from: brazil84
comment by brazil84 · 2014-03-07T06:30:22.846Z · LW(p) · GW(p)

If anything, I might expect the opposite to be true in this context.

Well consider the hypothetical I proposed:

suppose you are having a Socratic dialogue with someone who holds irrational belief X. Instead of simply laying out your argument, you ask the person whether he agrees with Proposition Y, where Proposition Y seems pretty obvious and indisputable. Our rational person might quickly and easily agree or disagree with Y. Whereas our irrational person needs to think more carefully about Y; decide whether it might undermine his position; and if it does, construct a rationalization for rejecting Y. This difference in thinking might be measured in terms of reaction times.

See what I mean?

I do agree that in other contexts, self-deception might require less thought. e.g. spouting off the socially preferable answer to a question without really thinking about what the correct answer is.

It'd definitely be interesting to hook someone up to a polygraph or EEG and have them take the Crowne-Marlowe Scale, though.

Yes.

comment by Viliam_Bur · 2014-03-04T16:47:22.838Z · LW(p) · GW(p)

That sample question reminds me of a "lie score", which is a hidden part of some personality tests. Among the serious questions, there are also some questions like this, where you are almost certain that the "nice" answer is a lie. Most people will lie on one or two of ten such questions, but the rule of thumb is that if they lie on five or more, you just throw the questionnaire away and declare them a cheater. -- However, if they didn't lie on any of these questions, you do a background check on whether they have studied psychology. And you keep in mind that the test score may be manipulated.
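A minimal sketch of that thresholding rule, with the cutoffs taken from the rule of thumb above; the question data and function names are illustrative, not a real scoring manual.

```python
# A sketch of the hidden "lie score" heuristic described above. The cutoffs
# (one or two "nice" answers is normal, five or more means discard) follow the
# rule of thumb in the comment, not a validated standard.

def lie_score(answers, nice_answers):
    """Count lie-scale items answered with the implausibly 'nice' option."""
    return sum(1 for given, nice in zip(answers, nice_answers) if given == nice)

def interpret(score):
    if score >= 5:
        return "throw the questionnaire away; the respondent is likely faking good"
    if score == 0:
        return "suspiciously clean; check whether they have studied psychology"
    return "within the typical range"

# Example: ten hidden items where 'T' is the implausibly nice answer on each.
answers      = ["T", "F", "T", "T", "T", "F", "T", "F", "T", "T"]
nice_answers = ["T"] * 10
print(interpret(lie_score(answers, nice_answers)))  # -> "throw the questionnaire away; ..."
```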

Okay, I admit that this problem would be much worse for rationality tests, because if you are screening for a given personality, the candidates most likely haven't studied psychology. But if CFAR or similar organizations become very popular, then many candidates for "highly rational person" will be "tainted" by the explicit study of rationality, simply because studying rationality explicitly is probably a rational thing to do (this is just an assumption), but it's also what an irrational person self-identifying as a rationalist would do. Also, practicing for IQ tests is obvious cheating, but practicing to get better at rational tasks is the rational thing to do, and a wannabe rationalist would do it, too.

Well, it seems like rationality tests would be more similar to IQ tests than to personality tests. Puzzles, time limits... maybe even reaction times or lie detectors.

Replies from: PeterDonis, brazil84
comment by PeterDonis · 2014-03-06T23:43:06.969Z · LW(p) · GW(p)

Among the serious questions, there are also some questions like this, where you are almost certain that the "nice" answer is a lie.

On the Crowne-Marlowe scale, it looks to me (having found a copy online and taken it) like most of the questions are of this form. When I answered all of the questions honestly, I scored 6, which, according to the test, indicates that I am "more willing than most people to respond to tests truthfully"; but what it indicates to me is that, for all but 6 of the 33 questions, the "nice" answer was a lie, at least for me.

The 6 questions were the ones where the answer I gave was, according to the test, the "nice" one, but just happened to be the truth in my case: for example, one of the 6 was "T F I like to gossip at times"; I answered "F", which is the "nice" answer according to the test--presumably on the assumption that most people do like to gossip but don't want to admit it--but I genuinely don't like to gossip at all, and can't stand talking to people who do. Of course, now you have the problem of deciding whether that statement is true or not. :-)

Could a rationality test be gamed by lying? I think that possibility is inevitable for a test where all you can do is ask the subject questions; you always have the issue of how to know they are answering honestly.

comment by brazil84 · 2014-03-04T20:28:37.156Z · LW(p) · GW(p)

Well, it seems like rationality tests would be more similar to IQ tests than to personality tests. Puzzles, time limits... maybe even reaction times or lie detectors.

Yes, reaction times seem like an interesting possibility. There is an online test for racial bias (Project Implicit's Implicit Association Test) which uses this principle. But it would be pretty easy to beat the test if the results counted for anything. Actually, lie detectors can be beaten too.

Perhaps brain imaging will eventually advance to the point where you can cheaply and accurately determine if someone is engaged in deception or self-deception :)

comment by orbenn · 2014-03-01T16:59:07.073Z · LW(p) · GW(p)

"rationality" branding isn't as good for keeping that front and center, especially compared to, say the effective altruism meme

Perhaps a better branding would be "effective decision making", or "effective thought"?

As I've already explained, there's a difficult problem here about how to be appropriately modest about our own rationality. When I say something, I never think it's stupid, otherwise I wouldn't say it. But at least I'm not so arrogant as to go around demanding other people acknowledge my highly advanced rationality. I don't demand that they accept "Chris isn't saying anything stupid" as an axiom in order to engage with me.

I think this is the core of what you are disliking. Almost all of my reading on LW is in the Sequences rather than the discussion areas, so I haven't been placed to notice anyone's arrogance. But I'm a little saddened and surprised by your experience, because for me the result of reading the Sequences has been less trust that my own level of sanity is high. I'm significantly less certain of my correctness in any argument.

We know that knowing about biases doesn't remove them, so instead of increasing our estimate of our own rationality, it should correct our estimate downwards. This shouldn't even cost us any pride, since we're also adjusting our estimates of everyone else's sanity down by a similar amount. As a check on whether we're doing things right, the result should be less time spent arguing and more time spent thinking about how we might be wrong and how to check our answers. Basically it should remind us to use type 2 thinking more whenever possible, and to seek effectiveness training for our type 1 thinking whenever available.

comment by 7EE1D988 · 2014-03-01T11:03:17.669Z · LW(p) · GW(p)

Or, as you might say, "Of course I think my opinions are right and other people's are wrong. Otherwise I'd change my mind." Similarly, when we think about disagreement, it seems like we're forced to say, "Of course I think my opinions are rational and other people's are irrational. Otherwise I'd change my mind."

I couldn't agree more with that - to a first approximation.

Now of course, the first problem is with people who think a person is either rational in general or not, and right in general or not. Being right or rational is conflated with intelligence, because people can't seem to imagine that a cognitive engine which has output so many right ideas in the past could be anything but a cognitive engine which outputs right ideas in general.

For instance and in practice, I'm pretty sure I strongly disagree with some of your opinions. Yet I agree with this bit over there, and other bits as well. Isn't it baffling how some people can be so clever, so right about a huge bundle of things (read: how they have opinions so very much like mine), and then suddenly you find they believe X, where X seems incredibly stupid and wrong to you for obvious reasons?

I posit that people want to find others like them (in a continuum with finding a community of people like them, some place where they can belong), and it stings to realize that even people who hold many similar opinions still aren't carbon copies of you, that their cognitive engine doesn't work exactly the same way as yours, and that you'll have to either change yourself, or change others (both of which can be hard, unpleasant work), if you want there to be less friction between you (unless you agree to disagree, of course).

Problem number two is simply that thinking yourself right about a certain problem, even after thinking about it for a long time before coming to your own conclusion, doesn't preclude new, original information or intelligent arguments from swaying your opinion. I'm often pretty darn certain about my beliefs (those I care about anyway, that is, usually the instrumental beliefs and methods I need to attain my goals), but I know better than to refuse to change my opinion or belief about a topic I care about if I'm conclusively shown to be wrong (though that should go without saying in a rationalist community).

Replies from: elharo, brilee
comment by elharo · 2014-03-01T19:46:36.804Z · LW(p) · GW(p)

Rationality, intelligence, and even evidence are not sufficient to resolve all differences. Sometimes differences are a deep matter of values and preferences. Trivially, I may prefer chocolate and you prefer vanilla. There's no rational basis for disagreement, nor for resolving such a dispute. We simply each like what we like.

Less trivially, some people take private property as a fundamental moral right. Some people treat private property as theft. And a lot of folks in the middle treat it as a means to an end. Folks in the middle can usefully dispute the facts and logic of whether particular incarnations of private property do or do not serve other ends and values, such as general happiness and well-being. However perfectly rational and intelligent people who have different fundamental values with respect to private property are not going to agree, even when they agree on all arguments and points of evidence.

There are many other examples where core values come into play. How and why people develop and have different core values than other people is an interesting question. However even if we can eliminate all partisan-shaded argumentation, we will not eliminate all disagreements.

comment by brilee · 2014-03-01T14:23:33.665Z · LW(p) · GW(p)

I posit that people want to find others like them (in a continuum with finding a community of people like them, some place where they can belong), and it stings to realize that even people who hold many similar opinions still aren't carbon copies of you, that their cognitive engine doesn't work exactly the same way as yours, and that you'll have to either change yourself, or change others (both of which can be hard, unpleasant work), if you want there to be less friction between you (unless you agree to disagree, of course).

Well said.

comment by Sniffnoy · 2014-03-01T22:06:03.644Z · LW(p) · GW(p)

For example, people say "I have a poor mental model of..." when they could have just said they don't understand it very well.

That... isn't jargon? There are probably plenty of actual examples you could have used here, but that isn't one.

Edit: OK, you did give an actual example below that ("blue-green politics"). Nonetheless, "mental model" is not jargon. It wasn't coined here, it doesn't have some specialized meaning here that differs from its use outside, it's entirely compositional and thus transparent -- nobody has to explain to you what it means -- and at least in my own experience it just isn't a rare phrase in the first place.

Replies from: Jiro
comment by Jiro · 2014-03-02T07:49:12.762Z · LW(p) · GW(p)

it doesn't have some specialized meaning here that differs from its use outside

It doesn't have a use outside.

I mean, yeah, literally the words do mean the same thing and you could find someone outside LessWrong who says it, but it's an unnecessarily complicated way to say things that generally is not used. It takes more mental effort to understand, it's outside most people's expectations for everyday speech, and it may as well be jargon, even if technically it isn't. Go ahead, go down the street, and the next time you ask someone for directions and they tell you something you can't understand, reply "I have a poor mental model of how to get to my destination". They will probably look at you like you're insane.

Replies from: VAuroch
comment by VAuroch · 2014-03-02T09:47:18.304Z · LW(p) · GW(p)

"Outside" doesn't have to include a random guy on the street. Cognitive science as a field is "outside", and uses "mental model".

Also, "I have a poor mental model of how to get to my destination" is, descriptively speaking, wrong usage of 'poor mental model'; it's inconsistent with the connotations of the phrase, which connotes an attempted understanding which is wrong. I don't "have a poor mental model" of the study of anthropology; I just don't know anything about it or have any motivation to learn. I do "have a poor mental model" of religious believers; my best attempts to place myself in the frame of reference of a believer do not explain their true behavior, so I know that my model is poor.

Replies from: Jiro
comment by Jiro · 2014-03-02T16:09:11.590Z · LW(p) · GW(p)

it's inconsistent with the connotations of the phrase, which connotes an attempted understanding which is wrong

I suggested saying it in response to being given directions you don't understand. If so, then you did indeed attempt to understand and couldn't figure it out.

"Outside" doesn't have to include a random guy on the street.

But there's a gradation. Some phrases are used only by LWers. Some phrases are used by a slightly wider range of people, some by a slightly wider than that. Whether a phrase is jargon-like isn't a yes/no thing; using a phrase which is used by cognitive scientists but which would not be understood by the man on the street, when there is another way of saying the same thing that would be understood by the man on the street, is most of the way towards being jargon, even if technically it's not because cognitive scientists count as an outside group.

Furthermore, just because cognitive scientists know the phrase doesn't mean they use it in conversation about subjects that are not cognitive science. I suspect that even cognitive scientists would, when asking each other for directions, not reply to incomprehensible directions by saying they have a poor mental model, unless they are making a joke or unless they are a character from the Big Bang Theory (and the Big Bang Theory is funny because most people don't talk like that, and the few who do are considered socially inept).

comment by hairyfigment · 2014-03-01T19:19:44.150Z · LW(p) · GW(p)

Yet I've heard people suggest that you must never be dismissive of things said by smart people, or that the purportedly high IQ of the LessWrong community means people here don't make bad arguments.

When? The closest case I can recall came from someone defending religion or theology - which brought roughly the response you'd expect - and even that was a weaker claim.

If you mean people saying you should try to slightly adjust your probabilities upon meeting intelligent and somewhat rational disagreement, this seems clearly true. Worst case scenario, you waste some time putting a refutation together (coughWLC).

comment by [deleted] · 2015-08-19T11:26:39.244Z · LW(p) · GW(p)

I hadn't come across the Principle of Charity elsewhere. Thanks for your insights.

comment by Vaniver · 2014-03-02T10:06:31.630Z · LW(p) · GW(p)

I once had a member of the LessWrong community actually tell me, "You need to interpret me more charitably, because you know I'm sane." "Actually, buddy, I don't know that," I wanted to reply—but didn't, because that would've been rude.

So, respond with something like "I don't think sanity is a single personal variable which extends to all held beliefs." It conveys the same information ("I don't trust conclusions solely because you reached them") but it doesn't convey the implication that this is a personal failing on their part.

I've said this before when you've brought up the principle of charity, but I think it bears repeating. The primary benefit of the principle of charity is to help you, the person using it; you seem to be talking mostly about how it affects discourse, and about how you don't like it when other people expect that you'll use the principle of charity when reading them. I agree with you that they shouldn't expect that, but I find it more likely that this is a few isolated incidents (and I can visualize a few examples) than that this is a general tendency.

comment by devas · 2014-03-01T13:29:45.824Z · LW(p) · GW(p)

I am surprised by the fact that this post has so little karma. Since one of the...let's call them "tenets" of the rationalism community is the drive to improve one's own self, I would have imagined that this kind of criticism would have been welcomed.

Can anyone explain this to me, please? :-/

Replies from: TheOtherDave
comment by TheOtherDave · 2014-03-01T16:53:10.631Z · LW(p) · GW(p)

I'm not sure what the number you were seeing when you wrote this was, and for my own part I didn't upvote it because I found it lacked enough focus to retain my interest, but now I'm curious: how much karma would you expect a welcomed post to have received between the "08:52AM" and "01:29:45PM" timestamps?

Replies from: devas
comment by devas · 2014-03-02T12:08:33.222Z · LW(p) · GW(p)

I actually hadn't considered the time; in retrospect, though, it does make a lot of sense. Thank you! :-)

comment by cousin_it · 2014-03-01T09:29:25.747Z · LW(p) · GW(p)

Just curious, how does Plantinga's argument prove that pigs fly? I only know how it proves that the perfect cheeseburger exists...

Replies from: ChrisHallquist, Alejandro1
comment by ChrisHallquist · 2014-03-03T08:05:06.152Z · LW(p) · GW(p)

Plantinga's argument defines God as a necessary being, and assumes it's possible that God exists. From this, and the S5 axioms of modal logic, it follows that God exists. But you can just as well argue, "It's possible the Goldbach Conjecture is true, and mathematical truths, if true, are necessarily true, therefore the Goldbach Conjecture is true." Or even "Possibly it's a necessary truth that pigs fly, therefore pigs fly."

(This is as much as I can explain without trying to give a lesson in modal logic, which I'm not confident in my ability to do.)
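For what it's worth, here is a bare sketch of the schema in standard modal notation (Box for "necessarily", Diamond for "possibly"); p can stand for any proposition you like, which is the whole point of the parody substitutions.

```latex
% A sketch of the argument's form in S5. Let p stand for "a maximally great being
% exists" -- or "pigs fly", or the Goldbach Conjecture; the schema doesn't care.
\begin{align*}
1.\quad & \Diamond \Box p            && \text{premise: possibly, $p$ is necessarily true} \\
2.\quad & \Diamond \Box p \to \Box p && \text{theorem of S5 (iterated modalities collapse)} \\
3.\quad & \Box p                     && \text{from 1 and 2 by modus ponens} \\
4.\quad & \Box p \to p               && \text{axiom T: whatever is necessary is true} \\
5.\quad & p                          && \text{from 3 and 4 by modus ponens}
\end{align*}
```

All the work is done by the possibility premise in step 1: grant it for any p and the schema hands you p.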

Replies from: cousin_it
comment by cousin_it · 2014-03-03T10:18:20.197Z · LW(p) · GW(p)

"Possibly it's a necessary truth that pigs fly, therefore pigs fly."

That's nice, thanks!

comment by Alejandro1 · 2014-03-01T16:06:06.492Z · LW(p) · GW(p)

Copying the description of the argument from the Stanford Encyclopedia of Philosophy, with just one bolded replacement of a definition irrelevant to the formal validity of the argument:

Say that an entity possesses “maximal excellence” if and only if it is **a flying pig**. Say, further, that an entity possesses “maximal greatness” if and only if it possesses maximal excellence in every possible world—that is, if and only if it is necessarily existent and necessarily maximally excellent. Then consider the following argument:

  • There is a possible world in which there is an entity which possesses maximal greatness.

  • (Hence) There is an entity which possesses maximal greatness.

Replies from: TheOtherDave
comment by TheOtherDave · 2014-03-01T16:49:27.539Z · LW(p) · GW(p)

This argument proves that at least one pig can fly. I understand "pigs fly" to mean something more like "for all X, if X is a typical pig, X can fly."

Replies from: Alejandro1
comment by Alejandro1 · 2014-03-01T17:28:02.683Z · LW(p) · GW(p)

You are right. Perhaps the argument could be modified by replacing "is a flying pig" by "is a typical pig in all respects, and flies"?

Replies from: TheOtherDave
comment by TheOtherDave · 2014-03-01T21:24:25.929Z · LW(p) · GW(p)

Perhaps. It's not clear to me that this is irrelevant to the formal validity of the argument, since "is a typical pig in all respects, and flies" seems to be a contradiction, and replacing a term in an argument with a contradiction isn't necessarily truth-preserving. But perhaps it is, I don't know... common sense would reject it, but we're clearly not operating in the realms of common sense here.

comment by TheAncientGeek · 2014-04-24T09:06:35.827Z · LW(p) · GW(p)

If you persistently misinterpret people as saying stupid things, then your evidence that people say a lot of stupid things is false evidence. You're in a sort of echo chamber. The PoC is correct because an actually stupid comment is a comment that can't be interpreted as smart no matter how hard you try.

The fact that some people misapply the PoC is not the PoC's fault.

The PoC is not in any way a guideline about what is worth spending time on. It is only about efficient communication, in the sense of interpreting people correctly. If you haven't got time to charitably interpret someone, you should default to some average or noncommittal appraisal of their intelligence, rather than accumulate false data that they are stupid.