Rationalization
post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-09-30T19:29:15.000Z · LW · GW · Legacy · 29 comments
In “The Bottom Line,” I presented the dilemma of two boxes, only one of which contains a diamond, with various signs and portents as evidence. I dichotomized the curious inquirer and the clever arguer. The curious inquirer writes down all the signs and portents, and processes them, and finally writes down, “Therefore, I estimate an 85% probability that box B contains the diamond.” The clever arguer works for the highest bidder, and begins by writing, “Therefore, box B contains the diamond,” and then selects favorable signs and portents to list on the lines above.
The first procedure is rationality. The second procedure is generally known as “rationalization.”
“Rationalization.” What a curious term. I would call it a wrong word. You cannot “rationalize” what is not already rational. It is as if “lying” were called “truthization.”
On a purely computational level, there is a rather large difference between:
- Starting from evidence, and then crunching probability flows, in order to output a probable conclusion. (Writing down all the signs and portents, and then flowing forward to a probability on the bottom line which depends on those signs and portents.)
- Starting from a conclusion, and then crunching probability flows, in order to output evidence apparently favoring that conclusion. (Writing down the bottom line, and then flowing backward to select signs and portents for presentation on the lines above.)
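The two flows can be caricatured in a few lines of code. This is a sketch of my own, with made-up signs and likelihood ratios; none of the numbers come from the box problem itself:

```python
# Hypothetical signs and portents, each with a made-up likelihood ratio
# favoring "the diamond is in box B" (ratios > 1 favor B, < 1 favor A).
signs = {"glow": 4.0, "scratches": 0.5, "seal": 2.0, "weight": 1.5}

def curious_inquirer(signs, prior_odds=1.0):
    """Forward flow: process ALL the evidence, then read off the
    bottom line, which is unknown until the computation finishes."""
    odds = prior_odds
    for ratio in signs.values():
        odds *= ratio
    return odds / (1.0 + odds)  # convert odds to a probability

def clever_arguer(signs):
    """Backward flow: the bottom line ("box B") is fixed in advance;
    the computation only selects which signs to write above it."""
    return [name for name, ratio in signs.items() if ratio > 1.0]

print(curious_inquirer(signs))  # ~0.857, not known before running
print(clever_arguer(signs))     # ['glow', 'seal', 'weight']
```

Note that the two functions consume the same evidence but compute different unknowns: the inquirer's output is a probability, the arguer's output is a filtered evidence list.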
What fool devised such confusingly similar words, “rationality” and “rationalization,” to describe such extraordinarily different mental processes? I would prefer terms that made the algorithmic difference obvious, like “rationality” versus “giant sucking cognitive black hole.”
Not every change is an improvement, but every improvement is necessarily a change. You cannot obtain more truth for a fixed proposition by arguing it; you can make more people believe it, but you cannot make it more true. To improve our beliefs, we must necessarily change our beliefs. Rationality is the operation that we use to obtain more accuracy for our beliefs by changing them. Rationalization operates to fix beliefs in place; it would be better named “anti-rationality,” both for its pragmatic results and for its reversed algorithm.
“Rationality” is the forward flow that gathers evidence, weighs it, and outputs a conclusion. The curious inquirer used a forward-flow algorithm: first gathering the evidence, writing down a list of all visible signs and portents, which they then processed forward to obtain a previously unknown probability for the box containing the diamond. During the entire time that the rationality-process was running forward, the curious inquirer did not yet know their destination, which was why they were curious. In the Way of Bayes, the prior probability equals the expected posterior probability: If you know your destination, you are already there.
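That last claim, that the prior equals the expected posterior, can be verified with a toy calculation (my own illustrative numbers, not from the post):

```python
# Conservation of expected evidence, with made-up numbers: before you
# look at a sign, your current belief already equals the average of the
# beliefs you expect to hold afterward.
p_h = 0.4              # prior P(diamond in box B)
p_e_given_h = 0.9      # P(see the sign | diamond in B)
p_e_given_not_h = 0.2  # P(see the sign | no diamond in B)

p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
posterior_if_e = p_e_given_h * p_h / p_e                  # belief if sign seen
posterior_if_not_e = (1 - p_e_given_h) * p_h / (1 - p_e)  # belief if not

expected_posterior = p_e * posterior_if_e + (1 - p_e) * posterior_if_not_e
print(expected_posterior)  # equals the prior, 0.4: if you know your
                           # destination, you are already there
```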
“Rationalization” is a backward flow from conclusion to selected evidence. First you write down the bottom line, which is known and fixed; the purpose of your processing is to find out which arguments you should write down on the lines above. This, not the bottom line, is the variable unknown to the running process.
I fear that Traditional Rationality does not properly sensitize its users to the difference between forward flow and backward flow. In Traditional Rationality, there is nothing wrong with the scientist who arrives at a pet hypothesis and then sets out to find an experiment that proves it. A Traditional Rationalist would look at this approvingly, and say, “This pride is the engine that drives Science forward.” Well, it is the engine that drives Science forward. It is easier to find a prosecutor and defender biased in opposite directions, than to find a single unbiased human.
But just because everyone does something, doesn’t make it okay. It would be better yet if the scientist, arriving at a pet hypothesis, set out to test that hypothesis for the sake of curiosity—creating experiments that would drive their own beliefs in an unknown direction.
If you genuinely don’t know where you are going, you will probably feel quite curious about it. Curiosity is the first virtue, without which your questioning will be purposeless and your skills without direction.
Feel the flow of the Force, and make sure it isn’t flowing backwards.
29 comments
Comments sorted by oldest first, as this post is from before comment nesting was available (around 2009-02-27).
comment by Robin_Hanson2 · 2007-09-30T20:10:17.000Z · LW(p) · GW(p)
Sadly, I almost always surprise economics graduate students looking for topics to research when I ask them: "What question, where you do not know the answer, would you most like to answer?"
comment by Naadir_Jeewa · 2007-09-30T20:47:43.000Z · LW(p) · GW(p)
How would this relate to Bruno Latour's conceptualization of Actor-Network-Theory, where the sociologist simply tries to maximise the number of sources of uncertainty in a set of trials, without resorting to an "explanatory social theory"?
comment by Adirian · 2007-09-30T21:05:18.000Z · LW(p) · GW(p)
I find the linguistic distinction to be better than you relate - to rationalize something is to start with something that isn't rational. (As if it were rational, it wouldn't need to be rationalized - it's already there.)
That being said, rationalization in action isn't always bad, because we don't always have conscious understanding of the algorithm used to produce our conclusions. This would be like, to use your example, Einstein coming to the conclusion of relativity - and then attempting to understand how he got there. Rationalization in this case is a useful tool, as it is, in effect, an attempt to obtain the variables that originally went into the algorithm, perhaps to examine their validity.
If you already understand how you got to a conclusion which you are then attempting to bolster - if the evidence that is filtering evidence is being ignored - then it is precisely as bad as you say.
comment by pdf23ds · 2007-09-30T22:04:01.000Z · LW(p) · GW(p)
Of course, in an etymological sense, "rationalization" doesn't seem so odd. "Reason" means both logic and motivation. Those two concepts are conflated in the word and related words, and "rationalization" is simply formed from "rationale". (Actual etymologists, or users of Google, may feel free to correct me.)
comment by Vladimir_Nesov · 2007-10-01T13:20:33.000Z · LW(p) · GW(p)
I agree with Adirian. Rationalization is a process of rational-explanation-seeking. It starts from statement that was obtained by non-rational process (as when you overheard something, or intuitively guessed something), then creates a rational explanation according to one's concept of rationality, concurrently adjusting the statement if necessary. So normal rationalization does change the conclusion: it can change its status from 'suspicious statement' to 'belief', or it can adjust it to be consistent with the facts. Biased rationalization, by contrast, builds its explanation according to a 'biased rationality'; for example, the 'clever arguer' applies a selection bias.
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-10-01T15:36:24.000Z · LW(p) · GW(p)
It starts from statement that was obtained by non-rational process (as when you overheard something, or intuitively guessed something)
An intuitive guess is non-scientific but not non-rational.
comment by Doug_S. · 2007-10-01T17:16:11.000Z · LW(p) · GW(p)
Random comment:
Many years ago, there were a series of articles written by the pseudonym Archibald Putt, collectively referred to as "Putt's Laws", that appeared in Research/Development magazine. One law is relevant to the topic at hand.
"Decisions are justified by benefits to the organization; they are made by considering benefits to the decisionmakers."
If it is easier to lie convincingly when you believe the lie, then rationalization makes perfect sense. One makes a decision based on selfish, primarily unconscious motives, and then comes up with a semi-convincing rationalization for public consumption. "I stole that because I deserved it" would be a classic example of this kind of justification.
comment by Vladimir_Nesov2 · 2007-10-01T17:37:04.000Z · LW(p) · GW(p)
Eliezer: An intuitive guess is non-scientific but not non-rational
It doesn't affect my point, but do you argue that intuitive reasoning can be made free of bias?
Replies from: None
↑ comment by [deleted] · 2013-01-02T21:13:37.344Z · LW(p) · GW(p)
An intuitive guess can be made without biasing the result (accept or reject), so long as one does not privilege the hypothesis.
comment by Vag · 2010-07-01T08:10:32.642Z · LW(p) · GW(p)
Your wonderful essay contains a flaw.
"box B contains the diamond"
In reality there is no way to check the correctness of a reasoning result "directly" (you cannot "open the box and see if it contains the diamond"). But if the result of the reasoning does not directly affect the reasoner, the setup is also unworkable.
So the correct story is: "One of two melted, unopenable boxes contains a bomb with a timer. The task is to select one box and throw it into a deep well, or else it will explode and mutilate the reasoner."
comment by MoreOn · 2010-12-26T07:32:51.413Z · LW(p) · GW(p)
Try answering this without any rationalization:
In my middle school science lab, a thermometer showed me that water boiled at 99.5 degrees C and not 100. Why?
Replies from: datadataeverywhere, ksvanhorn, Torvaun, Desrtopa, jslocum
↑ comment by datadataeverywhere · 2010-12-26T07:51:14.825Z · LW(p) · GW(p)
I suspect you have a point that I'm missing.
My take is: either the reading was wrong (experimental error of some kind), or it wasn't. If it wasn't wrong, then your water really was boiling at 99.5 degrees. There are a number of plausible explanations for the latter; the one I assign the highest prior to is that you were at an elevation above sea level.
So, my answer is in the form of a probability distribution. Give me more evidence and I will refine it, or demand an answer now and I will tell you "altitude," my current most plausible candidate. (Experimental error is my second candidate, first with how, and where in the water, you measured, then with the quality of the thermometer; after that trail things like impurities in the water.)
↑ comment by Torvaun · 2011-02-12T16:12:07.400Z · LW(p) · GW(p)
My experience leads me to assume that the thermometer was mismarked. My high school chemistry teacher drilled into us that the thermometers we had were all precise, but of varying accuracy. A thermometer might say that water boils at 99.5 C, but if it did, it would also say that it froze at -0.5 C. Again, there are conditions that actually change the temperature at which water boils, so it's possible you were at a lower atmospheric pressure or that the water was contaminated. But, given that we have a grand total of one data point, I can't narrow it down to a single answer.
Replies from: MoreOn
↑ comment by MoreOn · 2011-02-18T01:24:48.190Z · LW(p) · GW(p)
But, given that we have a grand total of one data point, I can't narrow it down to a single answer.
Exactly!
Given just one data point, every explanation for why we didn't observe water boiling at 100 degrees C is an excuse for why it should have. To honestly answer this question, we would have to have performed additional experiments.
But we had already had a conclusion we were supposed to have reached--a truth by definition, in our case. Reaching that conclusion in our imperfect circumstances required rationalization.
Replies from: Torvaun
↑ comment by Torvaun · 2011-02-21T14:17:58.925Z · LW(p) · GW(p)
Uh, no. Pressure affects boiling point. If you're at a different pressure, it should not boil at 100 degrees C. If your water is contaminated by, say, alcohol, the boiling point will change. We aren't trying to explain away datapoints, we're using them to build a system that's larger than "Water boils at 100 degrees Centigrade." Just adding "at standard temperature and pressure," to the end of that gives a wider range of predictable and falsifiable results.
What we're doing is rationality, not rationalization.
↑ comment by jslocum · 2011-03-20T17:38:11.590Z · LW(p) · GW(p)
You've missed a key point, which is that rationalization refers to a process in which one of many possible hypotheses is arbitrarily selected, which the rationalizer then attempts to support with a fabricated argument. In your query, you are asking that a piece of data be explained. In the first case, one filters the evidence, rejecting any data that too strongly opposes a pre-selected hypothesis. In the second case, one generates a space of hypotheses that all fit the data, and selects the most likely one as a guess. The difference is between choosing data to fit a hypothesis and finding the hypothesis that best fits the data. Rationalization is pointing to a blank spot on your map and saying, "There must be a lake somewhere around there, because there aren't any other lakes nearby," while ignoring the fact that it's hot and there's sand everywhere.
comment by [deleted] · 2012-11-05T19:48:48.366Z · LW(p) · GW(p)
Not every change is an improvement, but every improvement is necessarily a change. You cannot obtain more truth for a fixed proposition by arguing it; you can make more people believe it, but you cannot make it more true. To improve our beliefs, we must necessarily change our beliefs.
I know this of course, but the way you state it here really drives the point home. Well written.
comment by [deleted] · 2014-05-28T18:04:25.861Z · LW(p) · GW(p)
Apparently, this sense of the word "rationalize" only dates from 1922.
comment by TheAncientGeek · 2014-05-28T19:57:41.709Z · LW(p) · GW(p)
If rationality were able to select hypotheses from an infinite space of hypotheses, your distinction would be accurate. Theoretical AIXI works that way, kind of, but nothing made of atoms can implement it. Rationality picks from the N hypotheses that have occurred to the thinker, and rationalization is the degenerate case where N=1.
comment by epicurus · 2015-03-03T15:25:03.576Z · LW(p) · GW(p)
According to this article, one can predict a decision 7 seconds before it is actually made. Doesn't this, in some sense, mean that a large part of our thought process (certainly those 7 seconds) is actually rationalizing a decision we have already made?
Is my thinking off, or is this one more thing to actively guard against, noticing when we are letting our unconscious decide for us?
comment by Yoav Ravid · 2019-01-07T07:39:31.812Z · LW(p) · GW(p)
In Hebrew there's a synonym for rationalization that stems from the word "excuse" ("הַתְרָצָה"). I think it's quite fitting, as that's basically the process: you decide on a conclusion and excuse your way backward from it so it seems rational.
What do you think?
I'm not very good at English, so I'm not sure what an English word for it stemming from "excuse" would look like - have any suggestions?
Replies from: Ruby↑ comment by Ruby · 2019-01-07T08:35:58.449Z · LW(p) · GW(p)
I didn't know that was the word for excuse, but I think it's an excellent word in itself to use for rationalization. No synonym required. ״רצה״ is the root for "want" and "הַתְרָצָה" is the reflexive conjugation, so it's approximately "self-wanting." Which is exactly what rationalization is - reasoning towards what you want to be true.
Replies from: Yoav Ravid↑ comment by Yoav Ravid · 2019-01-07T10:19:27.769Z · LW(p) · GW(p)
sorry, i made a communication error. "הַתְרָצָה" is the other word for rationalization in Hebrew, it stems from the word for excuse which is "תירוץ".
Replies from: Ruby
comment by VeryPeeved · 2024-08-22T16:36:25.049Z · LW(p) · GW(p)
Calling it "Rationalization" is just another instance of a proud tradition of referring to antonyms by almost identical words (hypothermia vs hyperthermia) for some fucking reason.