Rationalists should beware rationalism
post by Kaj_Sotala · 2009-04-06T14:16:30.733Z · LW · GW · Legacy · 32 comments
Rationalism is most often characterized as an epistemological position. On this view, to be a rationalist requires at least one of the following: (1) a privileging of reason and intuition over sensation and experience, (2) regarding all or most ideas as innate rather than adventitious, (3) an emphasis on certain rather than merely probable knowledge as the goal of enquiry. -- The Stanford Encyclopedia of Philosophy on Continental Rationalism.
By now, there are some things which most Less Wrong readers will agree on. One of them is that beliefs must be fueled by evidence gathered from the environment. A belief must correlate with reality, and an important part of that is whether or not it can be tested - if a belief produces no anticipation of experience, it is nearly worthless. We can never try to confirm a theory, only test it.
And yet, we seem to have no problem coming up with theories that are either untestable or that we have no intention of testing, such as evolutionary-psychological explanations for the underdog effect.
I'm being a bit unfair here. Those posts were well thought out and reasonably argued, and Roko's post actually made testable predictions. Yvain even made a good try at solving the puzzle, and when he couldn't, he reasonably concluded that he was stumped and asked for help. That sounds like a proper use of humility to me.
But the way that ev-psych explanations get rapidly manufactured and carelessly flung around on OB and LW has always been a bit of a pet peeve for me, as that's exactly how bad ev-psych gets done. The best evolutionary psychology takes biological and evolutionary facts, applies them to humans, and then makes testable predictions, which it goes on to verify. It doesn't take existing behaviors and then try to come up with some nice-sounding rationalization for them, blind to whether or not the rationalization can be tested. Not every behavior needs to have an evolutionary explanation - it could have evolved via genetic drift, or be a pure side effect of some actual adaptation. If we set out to find an evolutionary reason for some behavior, we are assuming from the start that there must be one, when it isn't a given that there is. And even a good theory need not explain every observation.
Obviously I'm not saying that we should never come up with such theories. Be wary of those who speak of being open-minded and modestly confess their ignorance. But we should avoid giving them excess weight, and instead assign them very broad confidence intervals. "This seems to contradict the claim that the human mind is well adapted to its EEA. Is evolutionary psychology wrong? Maybe the creationists are correct after all", writes Roko, implying that it is crucial for us to come up with an explanation (yes, I do know that this is probably just a dramatic exaggeration on Roko's part, but it made such a good example that I couldn't help but use it). But regardless of whether or not we do come up with an explanation, that explanation doesn't carry much weight if it doesn't provide testable predictions. And even if it did provide such predictions, we'd need to find confirming evidence before lending it much credence.
I suspect that we rationalists may have a tendency towards rationalism, in the sense quoted above. In order to learn how to think, we study math and probability theory. We consider different fallacies, and find out how to dismantle broken reasoning, both that of others and our own. We learn to downplay the role of our personal experiences, recognizing that those may be just the result of random effects and small sample sizes. But learning to think more like a mathematician, whose empiricism resides in the realm of pure thought, does not predispose us to more readily go collect evidence from the real world. Neither does the downplaying of our personal experiences. Many of us are computer science majors, used to the comfortable position of being able to test our hypotheses without leaving the office. It is, then, an easy temptation to come up with a nice-sounding theory which happens to explain the facts, and then consider the question solved. Reason must reign supreme, must it not?
But if we really do so, we are endangering our ability to find the truth in the future. Our existing preconceptions constrain part of our creativity, and if we believe untested hypotheses too uncritically, the true ones may never even occur to us. If we believe in one falsehood, then everything that we build on top of it will also be flawed.
This isn't to say that all tests would necessarily have to involve going out of your room to dig for fossils. A hypothesis does get some validation from simply being compatible with existing knowledge - that's how it passes the initial "does this make sense" test in the first place. Certainly, a scholarly article citing several studies and theories in its support is already drawing on considerable supporting evidence. It often happens that a conclusion, built on top of previous knowledge, is so obvious that you don't even need to test it. Roko's post, while not yet in this category, drew on already-established arguments relating to the Near-Far distinction and other things, and I do in fact find it rather plausible. Unless contradictory evidence comes in, I'll consider it the best explanation of the underdog phenomenon, one which I can build further hypotheses on. But I do keep in mind that none of its predictions have been tested yet, and that it might still be wrong.
It is therefore that I say: certainly do come up with all kinds of hypotheses, but if they haven't been tested, be careful not to believe in them too much.
32 comments
Comments sorted by top scores.
comment by RobinHanson · 2009-04-06T15:01:32.221Z · LW(p) · GW(p)
This seems a bit of a rant to me. You seem to have a pretty narrow concept of "test" in mind; claims can be "tested" via almost any connection they make with other claims that connect etc. to things we see. This is what intellectual exploration looks like. Wait a half-century if you want to see the winning claim defenses all neatly laid out in hypothesis-test format.
Replies from: Kaj_Sotala
↑ comment by Kaj_Sotala · 2009-04-06T17:03:13.615Z · LW(p) · GW(p)
I didn't intend to imply that a hypothesis couldn't be tested by its connections to established theories, but looking at my post now, it does sound a bit like I would. I edited it to make this clearer - see what is now the second-to-last paragraph.
comment by Scott Alexander (Yvain) · 2009-04-06T18:27:32.552Z · LW(p) · GW(p)
IAWYC, but the reason I was interested in evolutionary explanations was twofold: first, that a lot of standard (nonevolutionary) psychologists had attacked the problem and come up with what I considered unsatisfying explanations; and second, that this seemed like exactly the sort of area evolutionary psychology had been successful at explaining in the past (ie, a universal human tendency relating to strategies in conflicts and potentially having a large impact on future success).
I don't know what to think about the proposed solutions, including Roko's. On the one hand, they all sound pretty good, including the non-evolutionary ones. On the other hand, they all sound pretty good. Although it's always possible that there was more than one pressure driving people to support the underdog, five or six separate ones working simultaneously is a bit of a stretch. That means I probably have a low standard for "sounds pretty good". Which might be your point.
Still, I don't know what you want us to do. Are you just saying keep a low probability for all untested hypotheses? That sounds like a pretty good idea.
Replies from: Kaj_Sotala
↑ comment by Kaj_Sotala · 2009-04-06T18:52:59.775Z · LW(p) · GW(p)
Are you just saying keep a low probability for all untested hypotheses?
Pretty much.
Replies from: steven0461
↑ comment by steven0461 · 2009-04-07T00:12:41.962Z · LW(p) · GW(p)
I'm not saying I can't make sense out of this in practice, but in theory, surely if a hypothesis is untested, then so is its negation.
comment by jimmy · 2009-04-06T19:34:09.462Z · LW(p) · GW(p)
Testing new predictions is great and all, and it may be a Dark Side thing to do to recklessly state beliefs that haven't had new predictions tested, but it's not necessary to come up with new predictions.
If you are good at estimating the algorithmic complexity of a theory, and it's significantly less than the complexity of the data it explains, it's most likely right. The only reason new predictions are favored over old ones is that you can't cheat with a complex theory that overfits the data.
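A toy illustration of that idea (raw string lengths as a crude stand-in for algorithmic complexity; the data and "theory" here are invented for the example, not taken from the comment):

```python
# Compare the cost of describing the raw data with the cost of describing a
# short "theory" (a generating program) plus whatever it leaves unexplained.

data = "01" * 20            # 40 characters of observed "data"

theory = '"01" * 20'        # a 9-character program that reproduces the data
residual = ""               # nothing left unexplained

cost_without_theory = len(data)
cost_with_theory = len(theory) + len(residual)

# 40 vs. 9: the theory is much shorter than the data it accounts for, so by this
# minimum-description-length heuristic it is probably capturing real structure
# rather than overfitting -- even without any new predictions being tested.
print(cost_without_theory, cost_with_theory)
```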
Replies from: Kaj_Sotala
↑ comment by Kaj_Sotala · 2009-04-06T19:38:36.853Z · LW(p) · GW(p)
True. But now you're talking about theories which make entirely the same predictions as old ones, and are simply more elegant or simple versions of the old theories. Those are a separate case, one that I wasn't talking about.
comment by AllanCrossman · 2009-04-06T18:39:59.675Z · LW(p) · GW(p)
We can never try to confirm a theory, only test it.
While I think I understand the point Eliezer makes in the link, it's still possible to try to confirm a theory. One may "fail" and disconfirm it, but still - you did something that would confirm it if it was true. So you tried to confirm it.
(Where confirm means "raise your confidence that it's true" rather than "prove absolutely certain".)
Replies from: anonym
↑ comment by anonym · 2009-04-06T22:19:57.728Z · LW(p) · GW(p)
The word confirm, in the context of philosophy of science, usually means establish as true with absolute certainty. If that's the case, you would never try to confirm a theory, because you know it's not possible.
Replies from: AllanCrossman
↑ comment by AllanCrossman · 2009-04-07T12:47:32.430Z · LW(p) · GW(p)
Well, I'm not sure "confirm" has to mean that.
Indeed, the Theorem's central insight - that a hypothesis is confirmed by any body of data that its truth renders probable - is the cornerstone of all subjectivist methodology.
-- Bayes' Theorem, Stanford Encyclopedia of Philosophy
Replies from: anonym
↑ comment by anonym · 2009-04-07T18:43:43.293Z · LW(p) · GW(p)
I meant usually descriptively rather than prescriptively. Having just read the link for the first time, though, I see that confirmation in the traditional sense is a red herring, because Eliezer obviously doesn't mean confirm in the traditional sense. My apologies for side-tracking the discussion.
To get back to your original point though, what I take Eliezer to be saying in that linked post is that an experiment is necessarily just as much an attempt to disconfirm the theory as to confirm it. If, then, what you are actually doing is trying to "confirm or disconfirm the theory", and there's no way to set up an experiment that might confirm but couldn't disconfirm, then it's more accurate to say that you are trying to "test the theory".
comment by gwern · 2009-04-07T23:37:21.843Z · LW(p) · GW(p)
"It is therefore that I say: certainly do come up with all kinds of hypotheses, but if they haven't been tested, be careful not to believe in them too much."
This is good advice. I'm one of those who throws around EP-style explanations pretty casually. But I think there's an unrecognized use of this kind of casually rationalistic thinking: it exercises our minds to come up with a materialistic reason for all sorts of social observations. Even if we can't test our EP conjecture for why people support the underdog, just formulating and considering it inoculates us against thoughts like "It's just moral to like the underdog" or "Things're more interesting that way", or "That's just the way things are" (or even more obnoxious explanations like religious ones).
comment by agolubev · 2009-04-07T15:43:12.533Z · LW(p) · GW(p)
It may just be semantics, and a look into the biological process of decision making. I think there have been studies showing that our body tends to react to the question of which card deck is better 2-3 times before our rational self declares an answer. It may just be the process of a body forming a "hypothesis" and then subjecting it to a scientific-type test before creating and declaring a rational theory.
Replies from: Nick_Tarleton
↑ comment by Nick_Tarleton · 2009-04-07T15:59:52.606Z · LW(p) · GW(p)
One such study: Deciding Advantageously Before Knowing the Advantageous Strategy
comment by steven0461 · 2009-04-06T22:29:46.242Z · LW(p) · GW(p)
Successfully tested hypotheses are more likely than untested hypotheses, but testable hypotheses are not more likely than untestable hypotheses. A lot of people commit this mistake; your post does not, but it does sort of suggest it.
Replies from: Sideways
↑ comment by Sideways · 2009-04-07T03:12:42.619Z · LW(p) · GW(p)
The rational way to establish the probability of a hypothesis is by testing it.
If a hypothesis is untestable in principle then its probability is zero, or undefined if you prefer. There's no way to assign any probability to it.
If it's impractical to test a hypothesis--e.g. if it would cost a trillion dollars to build a suitable particle accelerator--then the hypothesis stays in limbo until its proponents figure out a test to perform. At some point a probability can be assigned to it, but not yet.
Either way, if you're using "likeliness" to mean "probability" then it seems to me that testable hypotheses are "more likely" than untestable ones--insofar as we assign a probability to one and assign no probability to the other. If Bayes's Theorem keeps returning "undefined", you're doing it wrong.
Replies from: steven0461, janos
↑ comment by steven0461 · 2009-04-07T16:31:06.415Z · LW(p) · GW(p)
Not all untestable-in-principle hypotheses are meaningless. And you can't refuse to assign a probability to a meaningful hypothesis; you can only pretend not to assign a probability, and then assign probabilities anyway each time you need to make a decision or answer a question that it's relevant to, and these probabilities will be different in different contexts without any reason why there should be a difference.
Replies from: Sideways
↑ comment by Sideways · 2009-04-07T20:24:14.699Z · LW(p) · GW(p)
If I correctly understand the distinction you're making between "untestable" and "meaningless", then the hypothesis "God rewards Christians with Heaven and everyone else goes to Hell" is untestable but not meaningless, correct?
I don't bother to work Bayes' Theorem on untestable hypotheses, simply because there are an infinite number of untestable hypotheses and I don't have time to formally do math on them all. This is more or less equivalent to assigning them zero probability.
I stand by my claim that it's improper to say that an untestable hypothesis is "more likely" or "less likely" than a testable hypothesis, or another untestable one. Just because people are known to assign arbitrary probabilities to untestable hypotheses, doesn't make it a good or useful thing to do.
Replies from: steven0461
↑ comment by steven0461 · 2009-04-08T13:47:53.808Z · LW(p) · GW(p)
If I correctly understand the distinction you're making between "untestable" and "meaningless", then the hypothesis "God rewards Christians with Heaven and everyone else goes to Hell" is untestable but not meaningless, correct?
Yes, that's right. But in the evpsych context almost all hypotheses are at least meaningful, so we're drifting off the issue.
If you were unsure of evpsych story X, and you found a way to test it, would your probability for X go up? It shouldn't, and that's all I'm saying. The possibility of future evidence is not evidence.
↑ comment by janos · 2009-04-07T14:59:25.756Z · LW(p) · GW(p)
Bayes' Theorem never returns "undefined". In the absence of any evidence it returns the prior.
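A minimal numerical illustration of this (the numbers are mine, chosen for the example): if the evidence is equally likely either way, the update simply hands back the prior.

```python
# Bayes' Theorem: P(X|E) = P(E|X) P(X) / P(E), with P(E) expanded over X and not-X.
def posterior(prior, p_e_given_x, p_e_given_not_x):
    p_e = p_e_given_x * prior + p_e_given_not_x * (1 - prior)
    return p_e_given_x * prior / p_e

# Evidence that is equally likely under X and not-X carries no information
# about X, so the posterior equals the prior -- nothing comes back "undefined".
print(posterior(0.3, 0.5, 0.5))  # 0.3
```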
Replies from: Sideways
↑ comment by Sideways · 2009-04-07T20:04:47.220Z · LW(p) · GW(p)
Bayes' Theorem is undefined if p(X) is undefined.
Suppose our untestable-in-principle hypothesis is that undetectable dragons in your garage cause cancer. Then X is "undetectable garage dragon." As far as I can tell, there is no way to assign a probability to an undetectable dragon.
Please correct me if I'm wrong.
Replies from: steven0461, Annoyance
↑ comment by steven0461 · 2009-04-07T20:19:06.853Z · LW(p) · GW(p)
As far as I can tell, there is no way to assign a probability to an undetectable dragon.
Solomonoff induction. Presumably you agree the probability is less than .1, and once you've granted that, we're "just haggling over the price".
↑ comment by Annoyance · 2009-04-07T20:09:38.185Z · LW(p) · GW(p)
What's wrong with zero? An undetectable something is redundant and can be eliminated without loss; it has no consequences that the negation of its existence doesn't also imply. You might as well treat it as impossible - if you don't like giving zero probabilities, assign it whatever value you use for things-that-can't-occur.
comment by Paul Crowley (ciphergoth) · 2009-04-06T15:48:21.529Z · LW(p) · GW(p)
Please use a less grandstandy title!
comment by agolubev · 2009-04-07T14:55:56.568Z · LW(p) · GW(p)
What is everyone's response to - "people aren't rational, they're rationalizers"? I'm very new to the blog, but figured this would be the perfect place to get some thoughts on this idea.
Replies from: Kaj_Sotala
↑ comment by Kaj_Sotala · 2009-04-07T15:06:33.356Z · LW(p) · GW(p)
"Ultimately this is true, but while reaching the 'rational' state may be impossible, one can always get a bit closer to it."
comment by billswift · 2009-04-06T15:12:22.270Z · LW(p) · GW(p)
I consider myself a rationalist and have an epistemic view of rationalism, but these three claims about what I am "required" to believe are bogus:
"(1) a privileging of reason and intuition over sensation and experience"; sensation and experience are primary, you may doubt them if they conflict too strongly with your reason and other sensation and experience, and look for more evidence either way, but without sensation and experience you have nothing to reason about.
"(2) regarding all or most ideas as innate rather than adventitious"; I have serious doubts that "innate ideas" even exist, except, possibly, for some evolutionarily programmed "ideas" (depending on how you define ideas).
"(3) an emphasis on certain rather than merely probable knowledge as the goal of enquiry"; how can a person emphasize certain knowledge over "merely probable" knowledge when all knowledge is to some extent probabalistic?
For coming up with better ideas, the next best thing to actually testing them, and many cannot be practically tested, is coming up with lots of different ideas so they can be easily compared.
This reminded me of another rationalist novel, Hal Clement's "Half Life". It's set in a future where most of the dwindling population is working frantically to find a cure for the proliferating diseases that are driving humanity towards extinction. There are several rules about how to present ideas to avoid premature evaluation; one of them is that if you present a hypothesis you must also either include a way of testing it or include a second, independent explanation.
Replies from: Kaj_Sotala
↑ comment by Kaj_Sotala · 2009-04-06T18:21:38.484Z · LW(p) · GW(p)
It's not a claim of what you're required to believe as a rationalist, it's simply an overview of what members of a certain school of rationalism used to believe.
comment by Roko · 2009-04-06T14:49:16.217Z · LW(p) · GW(p)
This seems to contradict the claim that the human mind is well adapted to its EEA. Is evolutionary psychology wrong? Maybe the creationists are correct after all writes Roko, implying that it is crucial for us to come up with an explanation
In protestation of my innocence: in this case we have an apparent falsification of standard evolutionary theory. We must be able to explain away an apparent falsification, or we should abandon evolution and go apologize to the Discovery Institute.
Not being falsified by the data is different from being able to predict every single data point.
Later on I was careless and didn't quite make it clear enough that my article's main point was avoiding the falsification, and that explaining where the data point actually lay was of secondary concern.
Replies from: AnnaSalamon, Annoyance, Kaj_Sotala
↑ comment by AnnaSalamon · 2009-04-06T21:55:20.978Z · LW(p) · GW(p)
in this case we have an apparent falsification of standard evolutionary theory. We must be able to explain away an apparent falsification, or we should abandon evolution and go apologize to the Discovery Institute.
I think I’m misunderstanding your words here, Roko, so please don’t be offended. But if I’m understanding you correctly, I think you should reformulate “falsify” with probabilities. So that if Theory 1 implies that we’ll see underdog-dislike with e.g. a 99% probability, and underdog-liking with a 1% probability, we can say that observing underdog-liking decreases our credence in Theory 1, rather than falsifying Theory 1 full-stop.
Suppose that, after gathering together all LW-readers and thinking carefully through the issue, we decided that indeed, an unbiased observer who knew evolution and other facts of human psychology but who did not know our response to underdogs, would think it 99% likely that humans would dislike underdogs. (Assigning a probability as high as 99% sounds like absurd overconfidence, given both the difficulty of pulling high-probability predictions out of evolution in messy systems, and the thoroughness of analysis we can reasonably manage in an informal discussion group without experiments. But suppose.) Then, given that we indeed observe underdog-liking (at least in narratives, etc.), the observation of underdog-liking should indeed decrease our probabilistic estimate that evolution was true. But by how much?
Well, before we considered underdog-liking, Prob( observed biological data | evolution ) was unimaginably larger than Prob( observed biological data | creationism ), or given any other known hypothesis (er, I’m ignoring general theories like “evolution basically got it right, but with some as yet unknown set of errors we’ll need to fix”, to keep this analysis simple). Prob( observed biological data | evolution ) is sufficiently much greater that, even if Prob( underdog-liking | creationism ) = 1, and Prob ( underdog-liking | evolution ) = 0.01, the resulting Prob( observed biological data, and underdog-liking | evolution ) will still be unimaginably greater than Prob ( observed biological data, and underdog-liking | creationism ).
So the kind of probabilistic “falsification” we could get from observing underdog-liking in the face of a hypothetically strong evolutionary prediction against underdog-liking should decrease our credence in evolution, but not by a psychologically discernable amount, which makes talk of falsification misleading. Unless you suppose that the LW community could get evolutionary theory to make a “we don’t see underdog-liking” prediction with confidence much greater than 99%.
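A back-of-the-envelope version of that calculation, in odds form (the 10^40 prior odds is an arbitrary stand-in for "unimaginably larger", not a real estimate):

```python
# Posterior odds = prior odds * likelihood ratio (Bayes' rule in odds form).
def posterior_odds(prior_odds, likelihood_ratio):
    return prior_odds * likelihood_ratio

# Hypothetical prior odds for evolution over creationism given all other
# biological data -- "unimaginably larger", here parked at 10**40.
prior_odds = 10.0 ** 40

# P(underdog-liking | evolution) = 0.01 vs. P(underdog-liking | creationism) = 1.
likelihood_ratio = 0.01 / 1.0

# Still about 10**38: the observation shrinks the odds by a factor of 100,
# which is not a psychologically discernible dent in odds this lopsided.
print(posterior_odds(prior_odds, likelihood_ratio))
```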
↑ comment by Kaj_Sotala · 2009-04-06T17:05:06.125Z · LW(p) · GW(p)
I'm not sure how this would have falsified standard evolutionary theory?