Faster Than Science
post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-05-20T00:19:59.000Z · LW · GW · Legacy · 14 comments
I sometimes say that the method of science is to amass such an enormous mountain of evidence that even scientists cannot ignore it; and that this is the distinguishing characteristic of a scientist: a non-scientist will ignore it anyway.
Max Planck was even less optimistic:
"A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it."
I am much tickled by this notion, because it implies that the power of science to distinguish truth from falsehood ultimately rests on the good taste of grad students.
The gradual increase in acceptance of many-worlds in academic physics suggests that there are physicists who will only accept a new idea given some combination of epistemic justification and a sufficiently large academic pack in whose company they can be comfortable. As more physicists accept many-worlds, the pack grows larger, and hence more people go over their individual thresholds for conversion, with the epistemic justification remaining essentially the same.
But Science still gets there eventually, and this is sufficient for the ratchet of Science to move forward, and raise up a technological civilization.
Scientists can be moved by groundless prejudices, by undermined intuitions, by raw herd behavior—the panoply of human flaws. Each time a scientist shifts belief for epistemically unjustifiable reasons, it requires more evidence, or new arguments, to cancel out the noise.
The "collapse of the wavefunction" has no experimental justification, but it appeals to the (undermined) intuition of a single world. Then it may take an extra argument—say, that collapse violates Special Relativity—to begin the slow academic disintegration of an idea that should never have been assigned non-negligible probability in the first place.
From a Bayesian perspective, human academic science as a whole is a highly inefficient processor of evidence. Each time an unjustifiable argument shifts belief, you need an extra justifiable argument to shift it back. The social process of science leans on extra evidence to overcome cognitive noise.
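As a toy illustration of that bookkeeping (a sketch that is entirely my own, not from the post): Bayesian updates add in log-odds, so an unjustified three-bit shift costs a justified three-bit counter-argument before genuine evidence registers at all.

```python
def prob(log_odds_bits):
    # Convert log2-odds back to a probability.
    return 1.0 / (1.0 + 2.0 ** (-log_odds_bits))

belief = 0.0       # start at 50/50, i.e. 0 bits of log-odds
belief += 3.0      # groundless intuitive pull toward "a single world"
belief += -3.0     # a justifiable argument spent merely canceling the noise
belief += -2.0     # only now does real evidence move the needle
print(f"P = {prob(belief):.2f}")  # 0.20
```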
A more charitable way of putting it is that scientists will adopt positions that are theoretically insufficiently extreme, compared to the ideal positions that scientists would adopt, if they were Bayesian AIs and could trust themselves to reason clearly.
But don't be too charitable. The noise we are talking about is not all innocent mistakes. In many fields, debates drag on for decades after they should have been settled. And not because the scientists on both sides refuse to trust themselves and agree they should look for additional evidence. But because one side keeps throwing up more and more ridiculous objections, and demanding more and more evidence, from an entrenched position of academic power, long after it becomes clear from which quarter the winds of evidence are blowing. (I'm thinking here about the debates surrounding the invention of evolutionary psychology, not about many-worlds.)
Is it possible for individual humans or groups to process evidence more efficiently—reach correct conclusions faster—than human academic science as a whole?
"Ideas are tested by experiment. That is the core of science." And this must be true, because if you can't trust Zombie Feynman, who can you trust?
Yet where do the ideas come from?
You may be tempted to reply, "They come from scientists. Got any other questions?" In Science you're not supposed to care where the hypotheses come from—just whether they pass or fail experimentally.
Okay, but if you remove all new ideas, the scientific process as a whole stops working because it has no alternative hypotheses to test. So inventing new ideas is not a dispensable part of the process.
Now put your Bayesian goggles back on. As described in Einstein's Arrogance, there are queries that are not binary—where the answer is not "Yes" or "No", but drawn from a larger space of structures, e.g., the space of equations. In such cases it takes far more Bayesian evidence to promote a hypothesis to your attention than to confirm the hypothesis.
If you're working in the space of all equations that can be specified in 32 bits or less, you're working in a space of 4 billion equations. It takes far more Bayesian evidence to raise one of those hypotheses to the 10% probability level than it takes to raise that hypothesis from 10% to 90%.
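A minimal sketch of that arithmetic in odds form (the framing and code are mine, not the post's), measuring evidence as the log2 of the likelihood ratio it supplies:

```python
import math

space_bits = 32                           # 2**32 ~ 4.3 billion candidate equations
prior_odds = 1.0 / (2 ** space_bits - 1)  # uniform prior over the space

# Bits of evidence to promote one equation from the uniform prior to 10%:
bits_promote = math.log2((0.10 / 0.90) / prior_odds)
# Bits of further evidence to confirm it, moving from 10% to 90%:
bits_confirm = math.log2((0.90 / 0.10) / (0.10 / 0.90))

print(f"promote to attention (prior -> 10%): {bits_promote:.1f} bits")  # ~28.8
print(f"confirm (10% -> 90%): {bits_confirm:.1f} bits")                 # ~6.3
```

Roughly 29 bits of epistemic work go into singling out the equation, against about 6 bits for the experimental confirmation that Science officially counts.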
When the idea-space is large, coming up with ideas worthy of testing involves much more work—in the Bayesian-thermodynamic sense of "work"—than merely obtaining an experimental result with p<0.0001 for the new hypothesis over the old hypothesis.
If this doesn't seem obvious-at-a-glance, pause here and read Einstein's Arrogance.
The scientific process has always relied on scientists to come up with hypotheses to test, via some process not further specified by Science. Suppose you came up with some way of generating hypotheses that was completely crazy—say, pumping a robot-controlled Ouija board with the digits of pi—and the resulting suggestions kept on getting verified experimentally. The pure ideal essence of Science wouldn't skip a beat. The pure ideal essence of Bayes would burst into flames and die.
(Compared to Science, Bayes is falsified by more of the possible outcomes.)
This doesn't mean that the process of deciding which ideas to test is unimportant to Science. It means that Science doesn't specify it.
In practice, the robot-controlled Ouija board doesn't work. In practice, there are some scientific queries with answer spaces so large that, if you picked models at random to test, it would take zillions of years to hit on a model that made good predictions—like getting monkeys to type Shakespeare.
At the frontier of science—the boundary between ignorance and knowledge, where science advances—the process relies on at least some individual scientists (or working groups) seeing things that are not yet confirmed by Science. That's how they know which hypotheses to test, in advance of the test itself.
If you take your Bayesian goggles off, you can say, "Well, they don't have to know, they just have to guess." If you put your Bayesian goggles back on, you realize that "guessing" with 10% probability requires nearly as much epistemic work to have been successfully performed, behind the scenes, as "guessing" with 80% probability—at least for large answer spaces.
The scientist may not know he has done this epistemic work successfully, in advance of the experiment; but he must, in fact, have done it successfully! Otherwise he will not even think of the correct hypothesis. In large answer spaces, anyway.
So the scientist makes the novel prediction, performs the experiment, publishes the result, and now Science knows it too. It is now part of the publicly accessible knowledge of humankind, that anyone can verify for themselves.
In between was an interval where the scientist rationally knew something that the public social process of science hadn't yet confirmed. And this is not a trivial interval, though it may be short; for it is where the frontier of science lies, the advancing border.
All of this is more true for non-routine science than for routine science, because the argument turns on large answer spaces, where the answer is not "Yes" or "No" or drawn from a small set of obvious alternatives. It is much easier to train people to test ideas than to have good ideas to test.
14 comments
Comments sorted by oldest first, as this post is from before comment nesting was available (around 2009-02-27).
comment by Goplat · 2008-05-20T02:41:00.000Z · LW(p) · GW(p)
"it appeals to the (undermined) intuition of a single world. Then it may take an extra argument - say, that collapse violates Special Relativity"
So my intuition that there should be only one universe is useless, but your intuition that everything needs to be local (even though there are no time paradoxes involved in collapse like there would be if usable information could go back in time) is supposed to be a compelling argument?
These recent posts have been showing more rationalization than rationality.
comment by Richard_Hollerith2 · 2008-05-20T03:25:43.000Z · LW(p) · GW(p)
The fact that a great variety of experiments were done that might have found a nonlocal effect, and yet no nonlocal effect was ever found, does not make you pause before you post that?
comment by JessRiedel · 2008-05-20T04:00:18.000Z · LW(p) · GW(p)
I definitely agree that there is truth to Max Planck's assertion. And indeed, the Copenhagen interpretation was untenable as soon as it was put forth. However, Everett's initial theory was also very unsatisfying. It only became (somewhat) attractive with the much later development of decoherence theory, which first made plausible the claim that no-collapse QM evolution could explain our experiences. (For most physicists who examine it seriously, the claim is still very questionable).
Hence, the gradual increase in acceptance of the MW interpretation is a product both of the old guard dying off and the development of better theoretical support for MW.
comment by Damien_R._S. · 2008-05-20T05:14:36.000Z · LW(p) · GW(p)
I hate that Planck quote. It's full of "truthiness". I think it is in fact falsified by the histories of relativity, quantum mechanics, and continental drift/plate tectonics. I'm pretty confident about the latter; for the former two I'm mostly trusting Hofstadter's class lectures.
comment by Phillip_Huggan · 2008-05-20T05:59:38.000Z · LW(p) · GW(p)
Personally, I think the focus here on cognitive biases in decision making is itself biased, in that it distracts from many other factors (education, info sources, personality, mild mental psychosis, the level of caffeine and sugar in one's blood, etc). If it helps to shed any light on the Popperian process of scientific consensus, I'll offer my own anecdote, with the suggestion that the process he hypothesizes affects much more than science:
I could not believe in 2006 that the Chicago Bears would lose to the Colts. Even though the Colts had previously beaten a scarier aerial attack and had a revamped defence, I thought the Bears would take it.
Whatever K. Popper was describing (I don't know how true it is) is some sort of vindictive ego judgement call that extends far. Scientists are only highlighted here because they are falsely expected to be rational. In reality, their research is rational, but not the process where they weigh their research against the research of other scientists. The latter is contaminated by sociology of some sort.
comment by ME3 · 2008-05-20T15:15:04.000Z · LW(p) · GW(p)
I think that I have only now really understood what Eliezer has been getting at with the past ten or so posts: this idea that you could be a scientist if you generated hypotheses using a robot-controlled Ouija board. I think other readers have already said this numerous times, but this strikes me as terribly wrong.
First of all, good luck getting research funding for such hypotheses (and it wouldn't be fair to leave out funding from the description of Science if you're including institutional inertia and bias).
And I think we all know that in general, someone who used this method would never be able to get anywhere in academia, simply because they wouldn't be respected.
That, I think, teaches an important lesson. Individual scientists are not required to come up with correct or even plausible hypotheses because we all know that individual rationality is flawed. But the aggregate community of scientists and the people who fund them work together to evaluate the plausibility of a given hypothesis, and thereby effectively carry out the Bayesian analysis that Eliezer speaks of.
So one of many thousands of scientists can propose an utterly harebrained theory, and even spend his life on it if he wants, and it will barely register as a blip on the collective scientific radar. But when SR and GR were proposed, it was pretty much taken as a given that they were true, because they HAD to be true. I read somewhere that the experiment done by Eddington to verify the bending of light around the sun was far from accurate enough to actually be a verification of relativity. But it was still taken as a verification, because everyone was pretty much convinced anyway. And conversely, no matter how many experiments the cold fusion people do that show some unexpected effects, nobody takes them very seriously.
Now, you might say that this system is horribly inefficient, and many people say this on a regular basis. But here, the problem is simply that no individual human being can process that much information, and so the time it takes for a given data point to propagate through the community is very long. Of course, the internet helps, and if scientific journals were free, that would probably help also. But ultimately, I think this inefficiency is precisely the cost of a network evaluating all of the priors to find out the plausibility of a theory.
Of course, it also reduces a scientist to nothing more than a cog in a machine, and many people who want to be heroic can't deal with that. But in real life, no scientist is expected to evaluate his own hypothesis. They are expected to come up with a hypothesis, and try to verify it if they can get funding, and let the community decide to what extent the results are valid.
Replies from: jwflesh
↑ comment by jwflesh · 2010-09-27T02:11:22.886Z · LW(p) · GW(p)
In real life a real scientist must test his own hypothesis and the hypotheses of others. He must devise and test a hypothesis that lends itself to specific predictions, offering a means of testing its validity. All observation in a special field of science must be either for or against your hypothesis or my hypothesis, if the observation is to advance science. Science advances only by investigators who know how to disprove the empty theories and are already working on it; science advances only by disproofs. It can take many years before the scientific community gets it.
comment by Nick_Tarleton · 2008-05-20T15:22:35.000Z · LW(p) · GW(p)
Suppose you came up with some way of generating hypotheses that was completely crazy - say, pumping a robot-controlled Ouija board with the digits of pi - and the resulting suggestions kept on getting verified experimentally. The pure ideal essence of Science wouldn't skip a beat. The pure ideal essence of Bayes would burst into flames and die.
Why? Methinks Bayes would eventually conclude there's some unexpected correlation between reality and the Ouija board.
(This goes for what ME said as well - if Ouija boards actually generated useful hypotheses, eventually scientists would wise up and start using them all the time.)
comment by Caledonian2 · 2008-05-20T15:46:49.000Z · LW(p) · GW(p)
Yes, but a Bayesian Orthodoxy would never give the Ouija boards a second look. Science is excellent at correcting mistaken beliefs, but if you let your beliefs determine exclusively what you'll look for, you'll never notice the discrepancies between your formed beliefs and the observations.
comment by poke · 2008-05-20T17:26:03.000Z · LW(p) · GW(p)
I think the problem with all rationality-based models of science is that they don't take scientific realism seriously enough. That's not surprising given that most of them were developed by philosophers in response to radical skepticism about the world and our ability to describe it.
Hypotheses are constrained by the world in three ways: (1) the hypothesis is, in the first place, constrained by a set of initial measurements and observations; (2) the hypothesis is constrained by the tools and skills available for framing and solving the problem, which are a product of previous scientific developments, which are in turn a product of physical reality; and (3) the hypothesis is constrained by prior theory, which was also subject to constraints 1-3 during its development, and is thus likewise constrained by physical reality. Hypotheses are also subject to sociological constraints but these are ultimately grounded in physical constraints. These are strong constraints and they exist regardless of whether you introduce a global normative constraint of rationality (Bayesian or otherwise).
I think any argument for global normative constraints in science should first attempt to demonstrate that the available physical and institutional constraints are insufficient. This needs to be done with reference to science as practiced rather than other equally ahistorical formal models of science. If we take scientific realism seriously, and believe the objects of science exist, then they themselves can explain the success of science in explaining the world. (Scientists are implicitly aware of this. If you ask a scientist, "How did you reach that conclusion?" they'll say "I did x, y and z" and list off the practical steps they took. If, however, you ask them, "What is the methodology of science?" they'll talk about skepticism or logical positivism or Popper's falsificationism or Kuhn's revolutions or whatever happens to be the fad at the time.)
An analogy: If you're the first to land on the East coast of a new island, it's hardly surprising that you and your descendants will also be the first to discover its inland mysteries as well as the North, South and West coasts; the geography of the island, as a continuous body of land, ensures that you can travel from one point to another, and we need not posit some additional normative constraint that made your people Great Explorers. Likewise, it's reality that decides that you get Special Relativity if you turn the wheels on Newtonian dynamics enough, and not the alleged rationality of the researchers involved.
comment by Caledonian2 · 2008-05-21T14:02:39.000Z · LW(p) · GW(p)
One of the preconditions of becoming Great Explorers is that you have to explore, poke. If you are the first to discover the eastern coast, but never venture beyond that, you'll never find the northern, southern, or western.
If the researchers were not minimally rational, they would never have turned the wheels on Newtonian physics long enough to discover SR.
comment by [deleted] · 2011-06-21T01:41:30.669Z · LW(p) · GW(p)
I would be interested in developing a theory of saliency for scientific hypotheses. My current field, computer vision, has had some interesting results where saliency can be targeted for specific object types. For example, you could train a "people spotter" and a "bicycle spotter" and then go look at a scene. Both spotters will report false positives, etc., but the spotters give you some confidence (a) about whether the thing you want to find is even in the scene and (b) where in the scene to burn your resources when looking for it.
I'm not claiming it would be straightforward at all, but adapting this approach to detect salient ideas would seem to be the right direction. It raises some questions of immediate interest: what sort of feature vector should one extract to quantize an idea? There must be ways to choose a set of experiments, say, of maximum mutual information with respect to a set of hypotheses... that is, you can literally compute the redundancy between a set of experiments and whittle them down to the ones that bear the maximal relevance w.r.t. some set of hypotheses.
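For what it's worth, here is a minimal sketch of the experiment-selection half of this idea, with made-up toy numbers and hypothetical names (my illustration, not the commenter's actual proposal): rank candidate experiments by the mutual information between their outcomes and a hypothesis set.

```python
import math

hypotheses = {"H1": 0.5, "H2": 0.3, "H3": 0.2}  # prior P(h), toy values

# P(outcome | hypothesis) for each candidate experiment (hypothetical numbers)
experiments = {
    "exp_A": {"H1": [0.9, 0.1], "H2": [0.5, 0.5], "H3": [0.1, 0.9]},
    "exp_B": {"H1": [0.6, 0.4], "H2": [0.6, 0.4], "H3": [0.5, 0.5]},
}

def mutual_information(prior, likelihoods):
    """I(H; O) = sum over h, o of P(h) P(o|h) log2[P(o|h) / P(o)]."""
    n_outcomes = len(next(iter(likelihoods.values())))
    # Marginal P(o) = sum over h of P(h) P(o|h)
    p_o = [sum(prior[h] * likelihoods[h][o] for h in prior)
           for o in range(n_outcomes)]
    mi = 0.0
    for h in prior:
        for o in range(n_outcomes):
            if likelihoods[h][o] > 0:
                mi += prior[h] * likelihoods[h][o] * math.log2(likelihoods[h][o] / p_o[o])
    return mi

for name, lik in experiments.items():
    print(name, round(mutual_information(hypotheses, lik), 3), "bits")
# exp_A separates the hypotheses (~0.33 bits); exp_B is nearly redundant (~0.005 bits).
```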
comment by Nathan Spears (nathan-spears) · 2018-11-05T17:29:45.002Z · LW(p) · GW(p)
In between was an interval where the scientist rationally knew something that the public social process of science hadn't yet confirmed.
Isn't there another interval to consider in the process? There is (often) a point when the scientist's intuition is pushing him toward a hypothesis or an interpretation of data which has not yet been confirmed by his rationality. A flash of insight is required by the scientific process and yet is not accounted for.
It seems like a bit of a blind spot - if we had no more flashes of insight, the scientific process would grind to a halt, no?
comment by Егор Рябов (egor-ryabov) · 2024-08-29T19:15:01.113Z · LW(p) · GW(p)
A question has been cooking in me for quite some time, to which I don't see an answer yet. It is a frequent pattern in the sequences about quantum mechanics and Science that "there is no rational reason to even raise hypothesis X to attention, or even assign it an actual probability." This is often supported by mentioning how large the answer space is. While I can kind of guess what all the alternatives are to the "stupid theory" of Eliezer18, I plainly don't see all the other alternative answers to the question of wavefunction collapse. While I agree that collapse is heavily penalized by Occam's razor, and also for being a chicken among swans, this seems to me like the primary source of improbability, rather than a large answer space.
P.S. I am setting aside postulates of the exact details under which collapse happens, like "it happens because of human observation". Such theories do belong to a quite obvious large answer space. I am mainly concerned with postulates like "collapse happens eventually, with probability increasing exponentially in the number of entangled particles", which is experimentally falsifiable, but I don't see many analogous theories.