[SEQ RERUN] Changing the Definition of Science
post by MinibearRex
Today's post, Changing the Definition of Science was originally published on 18 May 2008. A summary (taken from the LW wiki):
Many of these ideas are surprisingly conventional, and being floated around by other thinkers. I'm a good deal less of a lonely iconoclast than I seem; maybe it's just the way I talk.
Discuss the post here (rather than in the comments to the original post).
This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was No Safe Defense, Not Even Science, and you can use the sequence_reruns tag or RSS feed to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.
comment by Luke_A_Somers ·
2012-05-11T14:03:23.190Z · LW(p) · GW(p)
Science tolerates errors, Bayescraft does not.
Hrm. It seems to me that they're both fault-tolerant in a technical sense; I guess this is meant as a normative rule? Scientists aren't exactly big fans of errors, and Science does enjoin minimizing them. Any reasonable Bayescraft would take the same stance.
comment by shminux ·
2012-05-11T02:23:57.049Z · LW(p) · GW(p)
(3) I think it is possible to do better in the real world. In the extreme case, a Bayesian superintelligence could use enormously less sensory information than a human scientist to come to correct conclusions. First time you ever see an apple fall down, you observe the position goes as the square of time, invent calculus, generalize Newton's Laws... and see that Newton's Laws involve action at a distance, look for alternative explanations with increased locality, invent relativistic covariance around a hypothetical speed limit, and consider that General Relativity might be worth testing.
Hmm, "real world" and "superintelligence" in the same breath...
Replies from: JoshuaZ
↑ comment by JoshuaZ ·
2012-05-11T02:36:50.760Z · LW(p) · GW(p)
The real problem here is more serious: even if one grants such a superintelligence, hypothesis space is extremely large, and it isn't clear why a superintelligence would immediately want to look for hypotheses with increased locality. Moreover, unless one has a lot more data (say, planetary orbits), one can't even get easy evidence for an inverse-square law of gravitational strength, and that requires very careful observations (to a close approximation, the orbits of all the major planets are circles; it is only with a lot of good data over time that one sees they are ellipses). The paragraph, and much of the rest of the essay, combines a failure to appreciate how much information is necessary with a failure to appreciate the incredible size of hypothesis space, in a way that resembles hindsight bias and the illusion of transparency.
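The near-circularity claim is easy to check numerically. A quick sketch (the eccentricities are standard published values; for an ellipse, the semi-minor/semi-major axis ratio is b/a = sqrt(1 - e²)):

```python
import math

# Orbital eccentricities of some major planets (standard published values).
eccentricity = {"Venus": 0.0068, "Earth": 0.0167, "Mars": 0.0934}

for planet, e in eccentricity.items():
    # Ratio of semi-minor to semi-major axis: b/a = sqrt(1 - e^2).
    ratio = math.sqrt(1 - e**2)
    print(f"{planet}: b/a = {ratio:.5f}")
```

Earth's orbit deviates from a circle by about 0.014% in axis ratio, and even Mars's by only about 0.4%, which is why distinguishing ellipses from circles took Tycho's unusually precise observations.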
Replies from: othercriteria
↑ comment by othercriteria ·
2012-05-11T14:27:03.974Z · LW(p) · GW(p)
It is only when one has a lot of good data over time that one sees that they are ellipses.
The case of Gauss computing the orbit of Ceres (which, I am now surprised to find, was not just a case of plugging in the data and running least squares over a class of simple orbital models) suggests that intelligence, coupled with the determination and capability to work through long chains of computation, can substantially reduce the amount of data required for inference.
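For contrast, here is what the naive "plug in the data and run least squares" version looks like: fit a general conic Ax² + Bxy + Cy² + Dx + Ey = 1 to observed positions by ordinary linear least squares. This is a toy sketch with synthetic, noiseless data covering the whole orbit — precisely what Gauss did not have (he had a short, noisy arc, hence the need for prior dynamical knowledge):

```python
import numpy as np

# Synthetic "observations": points on the ellipse x^2/4 + y^2 = 1.
t = np.linspace(0, 2 * np.pi, 12, endpoint=False)
x, y = 2 * np.cos(t), np.sin(t)

# Fit the general conic A x^2 + B xy + C y^2 + D x + E y = 1
# by ordinary linear least squares.
M = np.column_stack([x**2, x * y, y**2, x, y])
coeffs, *_ = np.linalg.lstsq(M, np.ones_like(x), rcond=None)

A, B, C, D, E = coeffs
print(f"A={A:.3f} B={B:.3f} C={C:.3f} D={D:.3f} E={E:.3f}")
# With clean, well-spread data this recovers A=0.25, C=1.0 (and B=D=E=0).
```

With only a short arc of the orbit, the same design matrix becomes badly conditioned and the fit degrades sharply, which illustrates JoshuaZ's point about how much the prior model was doing for Gauss.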
Replies from: JoshuaZ
↑ comment by JoshuaZ ·
2012-05-11T14:40:18.619Z · LW(p) · GW(p)
Gauss made that computation with Newton's laws and Kepler's work already behind him. He knew that the result had to be very close to an ellipse, that any deviation would come from nearby planets, and roughly what its order of magnitude would be. If he had had only the small amount of data available, with no idea what the orbit should look like, he wouldn't have been able to do it.
comment by Daermonn ·
2012-05-11T23:07:02.047Z · LW(p) · GW(p)
This is a good one. I definitely sympathize with Eliezer's point that Bayesian probability theory is only part of the solution. e.g., in philosophy of science, the deductive-nomological account of scientific explanation is being displaced by a mechanistic view of explanation. In this context, a mechanism is an organization of parts which is responsible for some phenomena. This change is driven by the inapplicability of D-N to certain areas of science, especially the biomedical sciences, where matters are more complex and we can't really deduce conclusions from universal laws; instead, people are treating law-like regularity as phenomena to be explained by appeal to the organized interactions of underlying parts.
e.g., instead of explaining "You display symptoms Y; all people with symptoms Y have disease X; therefore, you have disease X," mechanists explain by positing a mechanism whose functioning constitutes the phenomenon to be explained. This seems intimately related to Eliezer's "reduce-to-algorithm" stance, and an appeal to reduce abstract beliefs to physical mechanisms seems like a pretty good way to generalize his stance here. In addition, certain mechanistic philosophers have worked to connect mechanisms and mechanistic explanation with Bayesian probability, and with Pearl's work on Bayesian networks and causality. Jon Williamson at Kent has my favorite account: he uses Recursive Bayesian Networks to model this sort of mechanistic thinking quantitatively.
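The contrast between the D-N deduction and probabilistic inference can be made concrete with a minimal two-node Bayesian network (disease → symptom). The numbers below are invented for illustration, not medical data; the point is that observing the symptom merely raises the probability of the disease rather than entailing it:

```python
# Two-node Bayesian network: Disease -> Symptom.
# All probabilities are illustrative, not medical data.
p_disease = 0.01          # prior P(X)
p_sym_given_d = 0.90      # P(Y | X)
p_sym_given_not_d = 0.05  # P(Y | not X)

# Bayes' rule: P(X | Y) = P(Y | X) P(X) / P(Y)
p_symptom = p_sym_given_d * p_disease + p_sym_given_not_d * (1 - p_disease)
posterior = p_sym_given_d * p_disease / p_symptom

print(f"P(disease | symptom) = {posterior:.3f}")
# The D-N schema would conclude "you have disease X" outright;
# the network only moves the probability from 1% to about 15%.
```

This is the quantitative gap the mechanistic/Bayesian accounts are built to handle: law-like regularities become soft conditional probabilities rather than exceptionless premises.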
Replies from: DeeElf