# Einstein's Arrogance

post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-09-25T01:29:57.000Z · score: 52 (57 votes) · LW · GW · Legacy · 86 comments

In 1919, Sir Arthur Eddington led expeditions to Brazil and to the island of Principe, aiming to observe solar eclipses and thereby test an experimental prediction of Einstein’s novel theory of General Relativity. A journalist asked Einstein what he would do if Eddington’s observations failed to match his theory. Einstein famously replied: “Then I would feel sorry for the good Lord. The theory is correct.”

It seems like a rather foolhardy statement, defying the trope of Traditional Rationality that experiment above all is sovereign. Einstein seems possessed of an arrogance so great that he would refuse to bend his neck and submit to Nature’s answer, as scientists must do. Who can *know* that the theory is correct, in advance of experimental test?

Of course, Einstein did turn out to be right. I try to avoid criticizing people when they are right. If they genuinely deserve criticism, I will not need to wait long for an occasion where they are wrong.

And Einstein may not have been quite so foolhardy as he sounded . . .

To assign more than 50% probability to the correct candidate from a pool of 100,000,000 possible hypotheses, you need at least 27 bits of evidence (or thereabouts). You cannot expect to find the correct candidate without tests that are this strong, because lesser tests will yield more than one candidate that passes all the tests. If you try to apply a test that only has a million-to-one chance of a false positive (~ 20 bits), you’ll end up with a hundred candidates. Just *finding* the right answer, within a large space of possibilities, requires a large amount of evidence.
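The arithmetic here is easy to check directly. A quick sketch (using only the figures from the paragraph above):

```python
import math

# Bits needed to single out one candidate from a pool of N hypotheses.
pool = 100_000_000
bits_needed = math.log2(pool)
print(round(bits_needed, 1))  # ~26.6, i.e. "27 bits or thereabouts"

# A test with a million-to-one false-positive rate carries ~20 bits,
# so it leaves roughly pool / 2^20 candidates standing.
test_bits = math.log2(1_000_000)
survivors = pool / 2**test_bits
print(round(survivors))  # ~100 candidates still pass
```

This is why a 20-bit test cannot locate the answer in a 27-bit search space: it shrinks the pool by a factor of about a million, not a hundred million.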

Traditional Rationality emphasizes justification: “If you want to convince me of X, you’ve got to present me with Y amount of evidence.” I myself often slip into this phrasing, whenever I say something like, “To *justify* believing in this proposition, at more than 99% probability, requires 34 bits of evidence.” Or, “In order to assign more than 50% probability to your hypothesis, you need 27 bits of evidence.” The Traditional phrasing implies that you start out with a hunch, or some private line of reasoning that leads you to a suggested hypothesis, and then you have to gather “evidence” to *confirm* it—to convince the scientific community, or justify saying that you *believe* in your hunch.

But from a Bayesian perspective, you need an amount of evidence roughly equivalent to the complexity of the hypothesis just to locate the hypothesis in theory-space. It’s not a question of justifying anything to anyone. If there’s a hundred million alternatives, you need at least 27 bits of evidence just to focus your attention uniquely on the correct answer.

This is true even if you call your guess a “hunch” or “intuition.” Hunchings and intuitings are real processes in a real brain. If your brain doesn’t have at least 10 bits of genuinely entangled valid Bayesian evidence to chew on, your brain cannot single out a correct 10-bit hypothesis for your attention—consciously, subconsciously, whatever. Subconscious processes can’t find one out of a million targets using only 19 bits of entanglement any more than conscious processes can. Hunches can be mysterious to the huncher, but they can’t violate the laws of physics.

You see where this is going: *At the time of first formulating the hypothesis*—the very first time the equations popped into his head—Einstein must have had, *already in his possession,* sufficient observational evidence to single out the complex equations of General Relativity for his unique attention. Or he couldn’t have gotten them *right.*

Now, how likely is it that Einstein would have *exactly* enough observational evidence to raise General Relativity to the level of his attention, but only justify assigning it a 55% probability? Suppose General Relativity is a 29.3-bit hypothesis. How likely is it that Einstein would stumble across *exactly* 29.5 bits of evidence in the course of his physics reading?
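In odds form, the arithmetic this paragraph gestures at can be sketched as follows (the 29.3 and 29.5 figures are the post's illustrative numbers; the update rule is just Bayes' theorem with base-2 likelihood ratios, where each bit of evidence doubles the odds):

```python
# A "29.3-bit hypothesis" starts at prior odds of about 1 : 2^29.3.
prior_log_odds = -29.3   # log-base-2 odds against
evidence_bits = 29.5     # each bit doubles the odds in favor
posterior_log_odds = prior_log_odds + evidence_bits  # ~0.2
posterior_odds = 2**posterior_log_odds
p = posterior_odds / (1 + posterior_odds)
print(round(p, 2))  # ~0.53: *exactly* this much evidence leaves
                    # you barely past even odds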
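In odds form, the arithmetic this paragraph gestures at can be sketched as follows (the 29.3 and 29.5 figures are the post's illustrative numbers; the update rule is just Bayes' theorem with base-2 likelihood ratios, where each bit of evidence doubles the odds):

```python
# A "29.3-bit hypothesis" starts at prior odds of about 1 : 2^29.3.
prior_log_odds = -29.3   # log-base-2 odds against
evidence_bits = 29.5     # each bit doubles the odds in favor
posterior_log_odds = prior_log_odds + evidence_bits  # ~0.2
posterior_odds = 2**posterior_log_odds
p = posterior_odds / (1 + posterior_odds)
print(round(p, 2))  # ~0.53: *exactly* this much evidence leaves
                    # you barely past even odds
```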

Not likely! If Einstein had enough observational evidence to single out the correct equations of General Relativity in the first place, then he probably had enough evidence to be *damn sure* that General Relativity was true.

In fact, since the human brain is not a perfectly efficient processor of information, Einstein probably had *overwhelmingly more evidence* than would, in principle, be required for a perfect Bayesian to assign massive confidence to General Relativity.

“Then I would feel sorry for the good Lord; the theory is correct.” It doesn’t sound nearly as appalling when you look at it from that perspective. And remember that General Relativity *was* correct, from all that vast space of possibilities.

## 86 comments

Comments sorted by oldest first, as this post is from before comment nesting was available (around 2009-02-27).

Most theorists think they have the right theory but are wrong. So just because Einstein was right, that doesn't mean he had good reason to believe he was right. He could have been a lucky draw from the same process.

**[deleted]**· 2011-06-13T04:18:53.992Z · score: 8 (12 votes) · LW(p) · GW(p)

Indeed, I think theorists tend to make mistakes of either deductive or inductive bias. They start out tacitly assuming that reality must be some slightly noisy instantiation of a mathematical theorem ... that their favorite equations are *logically* true and for some mucky reason or another we just observe them as being noisily true.

From the post above:

To assign more than 50% probability to the correct candidate from a pool of 100,000,000 possible hypotheses, you need at least 27 bits of evidence (or thereabouts).

... *or* you just need to be that one guy who made a wild and unjustified guess about where to assign more than 50% of the probability (despite *not* having the bits of evidence to support it) and then got lucky.

This is true even if you call your guess a "hunch" or "intuition".

Only if you make the further assumption that whatever process generates hunches or intuition must be decision-theoretic. That may not be a bad assumption, but I'm not convinced it's accurate in human beings. From my own readings about Einstein, I think it's more likely that he over-asserted the relevance of differential geometry and justified the pursuit of a theory along those lines with what is essentially *faith* in the mathematics. I don't think it was a subconscious extension of integrated evidence at all. For every Einstein whose hunch focused on the right general field of mathematics, there were probably dozens or hundreds of other physicists who just thought that burgeoning algebraic topology was the ticket, or perhaps non-standard analysis was the ticket, or perhaps representation theory was the ticket.

Many theories have been defended on grounds of beauty - and been wrong. Geocentrism was an elegant theory that worked well and explained many things, like the absence of naked-eye parallax. Just before Einstein, we can find examples:

According to the vortex atomic theory originally proposed by William Thomson in 1867, atoms were nothing but vortical structures in the continuous ether. In this sense the atoms were quasi-material rather than material bodies. As the ultimate and irreducible quality of nature, the ether could exist without matter, but matter could not exist without the ether....By the early 1890s the vortex atomic theory had run out of steam and was abandoned by most researchers as a realistic theory of the constitution of matter. It was never unambiguously proved wrong by experiment, but after twenty years of work it degenerated into mathematics, failing to deliver what it promised of physical results. Physicists simply lost confidence in the theory. On the other hand, although the vortex atom was no longer considered a useful concept in explaining physical phenomena, heuristically and as a mental picture it lived on. Wrong as it was, to many British physicists it remained a methodological guiding principle, the ideal of what a future unified theory of matter and ether should look like. According to Michelson, writing in 1903, it “ought to be true even if it is not” (Kragh 2002: 80).

"And remember that General Relativity was correct, from all the vast space of possibilities."

The Einstein field equation itself is actually extremely simple:

G = 8*pi*T

where G is the Einstein tensor and T is the stress-energy tensor. Few serious competitors to GR have emerged, for a very good reason: what sane modifications could you make to this equation? G and T have to be directly proportional, because everyone knows that the curvature of spacetime (and hence the effect of gravity) is directly proportional to the quantity of matter/energy. The constant of proportionality is fixed by requiring agreement with Newtonian gravity in the weak-field limit. G must vanish when T vanishes, as there must be no gravity in the absence of matter. T itself cannot be modified, because it's the only sane way to measure mass, energy, and momentum in the Lorentzian manifold framework. G cannot be modified, because it must be constructible from the metric tensor (a property of spacetime), it must be directly proportional to the amount of curvature, and it must be invariant with respect to the choice of coordinate system (the full derivation is left as an exercise to the reader in my textbook).

Hanson, that's why I picked Einstein - he'd already been "lucky" once at that point. Also, he would still need quite a lot of evidence just to get to the point of having a *remote* chance of being right.

McCabe, you're right, it's completely obvious, it makes you wonder why Einstein took ten years to figure it out.

Not at all *obvious*, but there are very few hypotheses that could be specified as briefly. What took ten years was figuring out how to get from the very short specification to an algebraic expression that satisfied its constraints.

A bit like, if the theory was 'just' Fermat's Last Theorem, proving it could take a while.

**[deleted]**· 2012-08-05T22:40:39.567Z · score: 1 (1 votes) · LW(p) · GW(p)

McCabe, you're right, it's completely obvious, it makes you wonder why Einstein took ten years to figure it out.

Doesn't that apply to the MWI too?

Tom, is that an elaborate joke?

I agree with Tom that there isn't that much room to change the field equations once you have decided on the Riemannian tensor framework: gravity cannot be expressed as first-order differential equations and still fit with observation, while the number of objects available to build a set of second-order equations is very limited. The equations are the simplest possibility (with the cosmological constant as a slight uglification, but it is just a constant of integration).

But selecting the tensor framework, that is of course where all the bits had to go. It is not an obvious choice at all.

It is interesting to note that Einstein's last paper, "On the relativistic theory of the non-symmetric field" includes a discussion of the "strength" of different theories in terms of how many undetermined degrees of freedom they have. http://books.google.com/books?id=tB9Roi3YnAgC&pg=PA131&lpg=PA131&dq=%22relativistic+theory+of+the+non+symmetric+field%22&source=web&ots=EkMv5tudsI&sig=lkTQE94Ay1h2-qS0mcbGT3xa22M If I recall right, he finds his own theory to be rather flabby.

Um, guys, there are an infinite number of possible hypotheses. Any evidence that corroborates one theory also corroborates (or fails to refute) an infinite number of alternative specifiable accounts of the world.

What evidence does is allow us to say "Whatever the truth is, it must coexist in the same universe with the true nature of this evidence I have accepted. Theory X and its infinite number of variants seems to be ruled out by this evidence (although I may have misinterpreted the theory or the nature of the evidence), whereas Theory Y and its infinite number of variants seems not yet to be ruled out."

Yeah, I realize this is a complicated way to phrase it. The reason I like to phrase it this way is to point out that Einstein did not have merely 27 "bits" of evidence; he had VAST evidence, based on an entire lifetime of neuron-level programming, that automatically focused his mind on a productive way of thinking about the universe. He was imagining and eliminating vast swaths of potential theories of the universe, as are we all, from his earliest days in the womb. This is hardly surprising, considering that humans are the result of an evolutionary process that systematically killed the creatures who couldn't map the universe sufficiently well.

We can never know if we are getting to the right hypothesis. What we can say is that we have arrived at a hypothesis that is isomorphic with the truth, as we understand that hypothesis, over the span of evidence we think we have and think we understand. Always the next bit of evidence we discover may turn what we think we knew upside down. All knowledge is defeasible.

There are not an infinite number of possible hypotheses in a great many sensible situations. For example, suppose the question is "who murdered Fred?", because we have already learned that he was murdered. The already known answer: "A human alive at the time he died.", makes the set finite. If we can determine when and where he died, the number of suspects can typically be reduced to dozens or hundreds. Limiting to someone capable of carrying out the means of death may cut 90% of them.

To the extent that "bits" of evidence means things that we don't know yet, the number of bits can be much smaller than suggested. To the extent that "bits" of evidence includes everything we know so far, we all have trillions of bits already in our brains and the minimal number is meaningless.
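The narrowing the comment describes can be put in the post's bit-counting terms; a quick sketch (the population and suspect-pool numbers here are illustrative assumptions, not figures from the thread):

```python
import math

# Singling out one person from the whole population takes ~33 bits.
population = 6_500_000_000
total_bits = math.log2(population)
print(round(total_bits, 1))  # ~32.6 bits in total

# Ordinary facts carry most of them. Knowing the time and place of
# death might cut the pool to, say, 200 people:
bits_from_location = math.log2(population / 200)
print(round(bits_from_location, 1))  # ~25 bits from time and place alone

# ...while each further cut that eliminates 90% of suspects adds only
bits_per_90pct_cut = math.log2(10)
print(round(bits_per_90pct_cut, 1))  # ~3.3 bits
```

So most of the bits are carried by mundane background knowledge, and only a few by "clues" in the detective-story sense.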

What about the aliens who landed on earth, murdered Fred and then went away again? Or the infinite number of other possibilities, each of which has a very small probability?

What confuses me about this is that, if we do accept that there are an infinite number of possibilities, most of the possibilities must have an infinitesimal probability in order for everything to sum to 1. And I don't really understand the concept of an infinitesimal probability -- after all, even my example above must have some finite probability attached?

At any one time, only a finite region of space around any point is reachable at sub-light speeds. As a result, there is only a finite amount of matter, and hence only finitely many things that can happen, at the point where Fred died. This limits us to finite probabilities of discrete events.

Were your case possible, and were we talking about continuous probabilities, any single event would have probability zero; only an "area" in probability space between two limiting values (events in probability space) would give you a finite probability. Your issue is one that I had trouble with until I really sat down and thought about how integrals work.

FYI: everything I have said is essentially based on my understanding of special relativity, probability and calculus and are more than open to criticism.

The probability that the universe only has finite space is not exactly 1, is it? Much more might exist than our particular Hubble volume, no? What probability do the, say, world's top 100 physicists assign, on average, to the possibility that infinitely much matter exists? And on what grounds?

To my understanding, the universe might be so large that everything that could be described with infinitely many characters actually exists. That kind of "TOE" actually passes the Ockham's razor test excellently: if the universe is that large, then it could (in principle) be exhaustively described by a very simple and short computer program, namely one that produces a string consisting of all the integers in order of size, 110111001011101111000... ad infinitum, translated into any widespread language using practically any arbitrarily chosen system for translation. Name anything that could exist in any universe of countably infinite size, and it would be fully described, even at infinitely many places, in the string of characters that such a simple computer program would produce.
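The string quoted above is just the binary numerals for 1, 2, 3, ... run together, so the "very simple and short computer program" really can be written in a couple of lines (a sketch; the function name is mine):

```python
# Concatenate the binary representations of 1, 2, 3, ..., n.
def integer_string(n):
    return "".join(format(i, "b") for i in range(1, n + 1))

print(integer_string(8))  # -> 110111001011101111000, matching the quote
```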

Why not assign a pretty large probability to the possibility that the universe is that large, since all other known theories about the size of the universe seem to have a harder time with Ockham's razor?

"The probability that the universe only has finite space is not exactly 1, is it?"

Nooooo, that's not it. The probability that the *reachable space* from a *particular point* within a *certain time* is finite is effectively one.

So it doesn't matter how large the universe is - the aliens a few trillion ly away *cannot* have killed Bob.

Just to point out what may be a nitpick or a clarification: it's perfectly possible for infinitely many positive things to sum to a finite number. 1/2+1/4+1/8+...=1.

There can be infinitely many potential murderers. But if the probability of each having done it drops off fast enough you can avoid anything that is literally infinitesimal. Almost all will be less than 1/3^^^^^^3 of course, but that's a perfectly well defined number you know how to do maths with.

Hate to nitpick myself, but the terms do need to shrink fast enough: the harmonic series 1/2 + 1/3 + 1/4 + ... diverges (e.g., by the integral test). Sum 1/n^2 = 1 + 1/4 + 1/9 + ... = (pi^2)/6 is another fitting convergent example.
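A quick numerical check of the series under discussion (standard results only; the cutoffs are arbitrary):

```python
import math

# Geometric: 1/2 + 1/4 + 1/8 + ... converges to 1.
geometric = sum(1 / 2**n for n in range(1, 60))
print(geometric)  # essentially 1.0

# Harmonic: 1/2 + 1/3 + 1/4 + ... diverges, albeit very slowly.
harmonic = sum(1 / n for n in range(2, 1_000_000))
print(round(harmonic, 1))  # already past 13 and still climbing

# Basel: 1 + 1/4 + 1/9 + ... converges to pi^2/6.
basel = sum(1 / n**2 for n in range(1, 100_000))
print(round(basel, 4), round(math.pi**2 / 6, 4))  # both ~1.6449
```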

An interesting question, in this context, is what it would mean for infinitely many possibilities to exist in a "finite space about any point that can be reached at sub-speed of light times." Would it be possible under the assumption of a discrete universe (a universe decomposable no further than the smallest, indivisible pieces)? This is an issue we don't have to worry about in dealing with the infinite sums of numbers that converge to a finite number.

How about we put it this way: In the infinite space of possible theories, most of them are far too complex to ever have enough evidence to locate. (If it takes 3^^^3 bits of information to verify the theory... you're never going to verify the theory.)

In realistic circumstances, we have really a quite small list of theories to choose from, because the list of theories that we are capable of comprehending and testing in human lifetimes is itself very small.

**[deleted]**· 2012-06-12T02:46:47.076Z · score: -2 (4 votes) · LW(p) · GW(p)

Your comments are clogging up the recent comments feed. I normally wouldn't mind, but your comments are often replies to comments made several years ago by users who no longer post. Please be mindful of this when posting. Thanks!

I normally wouldn't mind, but your comments are often replies to comments made several years ago by users who no longer post.

This is fine - if the comments provide useful insight (they don't in this case). We encourage (productive) thread necromancy.

Many popular reports of Eddington's test mislead people into thinking it provided significant evidence. See these two Wikipedia pages for reports that the raw evidence was nearly worthless. Einstein may have known how little evidence that test would provide.

"McCabe, you're right, it's completely obvious, it makes you wonder why Einstein took ten years to figure it out."

I never said it was obvious; I said that the equations were a unique solution imposed by various constraints. Proving that the equations are a unique solution is quite difficult; I can't do it, even with a ready-made textbook in front of me. There are many examples of simple, unique-solution equations being very hard to derive- Newton's law of gravity and Maxwell's laws of electromagnetism come to mind.

"But selecting the tensor framework, that is of course where all the bits had to go. It is not an obvious choice at all."

I agree that it is not at all obvious, but the search space doesn't seem to be all that large- how many mathematical toys are there which could form a viable framework for gravity? The difficulty seems to be in understanding the math well enough to determine whether it can represent real-world phenomena. Differential geometry is not a simple Bayesian hypothesis like "the cat is blue"; to figure out whether a piece of evidence Q supports a geometric theory of gravity, you have to understand what a geometric theory of gravity would look like (in Bayesian terms, which outcomes it would predict), which is quite difficult.

"Tom, is that an elaborate joke?"

No. What makes you think that?

*The Einstein field equation itself is actually extremely simple:*

G = 8*pi*T

Sure, if we don't mind that G and T take a full page to write out in terms of the derivatives of the metric tensor. By this logic *every* equation is extremely simple -- it simply asserts that A=B for some A,B. :-)

http://mathoverflow.net/questions/53122/mathematical-urban-legends

Another urban legend, which I've heard told about various mathematicians, and which Misha Polyak self-effacingly tells about himself (and therefore might even be true), is the following:

As a young postdoc, Misha was giving a talk at a prestigious US university about his new diagrammatic formula for a certain finite type invariant, which had 158 terms. A famous (but unnamed) mathematician was sitting, sleeping, in the front row. "Oh dear, he doesn't like my talk," thought Misha. But then, just as Misha's talk was coming to a close, the famous professor wakes with a start. Like a man possessed, the famous professor leaps up out of his chair, and cries, "By golly! That looks exactly like the Grothendieck-Riemann-Roch Theorem!!!" Misha didn't know what to say. Perhaps, in his sleep, this great professor had simplified Misha's 158 term diagrammatic formula for a topological invariant, and had discovered a deep mathematical connection with algebraic geometry? It was, after all, not impossible. Misha paced in front of the board silently, not knowing quite how to respond. Should he feign understanding, or admit his own ignorance? Finally, because the tension had become too great to bear, Misha asked in an undertone, "How so, sir?" "Well," explained the famous professor grandly. "There's a left hand side to your formula on the left." "Yes," agreed Misha meekly. "And a right hand side to your formula on the right." "Indeed," agreed Misha. "And you claim that they are equal!" concluded the great professor. "Just like the Grothendieck-Riemann-Roch Theorem!"

**[deleted]**· 2012-08-03T23:00:37.926Z · score: 0 (0 votes) · LW(p) · GW(p)

Sure, if we don't mind that G and T take a full page to write out in terms of the derivatives of the metric tensor.

Yeah, but there are only three objects you can write in terms of the metric tensor which transform the “right” way (G, T, and g itself). So the most general equation which satisfies those transformation laws is aG + bT + cg = 0.

Now, a is non-zero (otherwise you get a universe where there's no matter/energy other than “dark energy”), so by redefining b and c we have G + bT + cg = 0; b is negative (because things attract each other rather than repelling each other) and we call it -8*pi*G/c^4 (it's just a matter of choice of units of measurement; we might as well set it to 1); and c is the cosmological constant.

I doubt that Scott will reply to this, 5 years later and on a different site, so let me try instead.

there are only three objects you can write in terms of the metric tensor which transform the “right” way (G, T, and g itself).

Hindsight bias? There are plenty of such objects. Google f(R) gravity, for example. There are also many different contractions of powers of products of R, T and G that fit. There is also torsion, and probably other things (supergravity and string theory tend to add a few).

You might want to argue that G=T is "the simplest", but it is anything but, for the reasons Scott explained. Once you find something that works, you call it G and T, write G=T and call it "simple". That's what Einstein did, since his first attempt, R=T, did not work out.

"Sure, if we don't mind that G and T take a full page to write out in terms of the derivatives of the metric tensor."

The Riemann tensor is a more natural measure of curvature than the metric tensor, and even in that language it's still pretty simple:

8*pi*T_ab = R_ab - (1/2)*g_ab*R

where R_ab = (Riemann tensor)^c_(acb) and R = g^ab * R_ab.

You can make any theory seem complicated by writing it out in some nonstandard format. Take Maxwell's equations of electromagnetism in tensor form:

dF = 0
d*F = 4*pi*J

Now differential form:

(divergence) E = rho
(divergence) B = 0
(curl) E = -dB/dt
(curl) B = J + dE/dt

Now integral form:

(flux of E over closed surface A) = q
(flux of B over closed surface A) = 0
(line integral of E over closed loop l) = -d(flux of B over surface enclosed by l)/dt
(line integral of B over closed loop l) = (current I passing through surface enclosed by l) + d(flux of E over surface enclosed by l)/dt

Now in action-at-a-distance form:

E = (sum q) -q/4/pi * ((r' unit vector from q)/r'/r' + r' * d/dt ((r' unit vector from q)/r'/r') + d^2/dt^2 (r' unit vector from q))
B = (sum q) E x -(r' unit vector from q)

Your R is actually the Ricci tensor, not the Riemann tensor. The Riemann tensor has four indices, not two. The Ricci tensor is formed by contracting the Riemann tensor on its first and third indices.

Tom, I expect to hear from you soon on the many new amazing physics discoveries you will generate using your insight that most previous physics problems have had obvious unique solutions. If only you had been around to solve the problem instead of Maxwell and Einstein, how much work could have been saved!

I thought that, when you try to apply general relativity to a world described by quantum mechanics, you end up trying to measure curvature of surfaces that do not have a well-defined curvature, much like how the slope (derivative) of y = |x| is undefined at x=0?

I've heard several different descriptions of the "contradictions" between quantum mechanics and general relativity. One is that the mathematical functions used to define general relativity are undefined on the type of spacetime described by quantum mechanics; naively trying to apply one to the other requires you to find limits that do not exist (or something like that). Another explanation said that yes, you can create a quantum theory of gravity using a "naive" approach, but such a theory requires an infinite number of arbitrary physical constants and is therefore completely useless because 1) you can't actually measure an infinite number of physical constants and 2) if you don't measure them, the proper "choice" of constants can give you any result whatsoever, so it can't make any predictions about the actual universe.

By the way, has anyone else here had the thought that the reason quantum mechanics and general relativity are contradictory yet seem to predict reality perfectly is that "there's a bug in the code"?

The mathematical inconsistency between quantum mechanics and general relativity illustrates a key point. Most of the time the hypothesis set for new solutions, rather than being infinite, is null. It is often quite easy to show that every available theory is wrong. Even when we know that our theory is clearly inconsistent with reality, we still keep using it until we come up with something better. Even if General Relativity had been contradicted by some experimental discovery in 1963, Einstein would still have been lauded as a scientist for finding a theory that fit more data points than the previous one.

In science, and in a lot of other contexts, simply showing that a theory could be right is much more important than establishing to any degree of statistical significance that it is right.

"If only you had been around to solve the problem instead of Maxwell and Einstein, how much work could have been saved!"

Obvious != simple != easy to learn. You of all people should understand this. You seemed to understand it seven years ago, back during the days of your wild and reckless youth. To quote SitS:

"Let's take a concrete example, the story Flowers for Algernon (later the movie Charly), by Daniel Keyes. (I'm afraid I'll have to tell you how the story comes out, but it's a Character story, not an Idea story, so that shouldn't spoil it.) Flowers for Algernon is about a neurosurgical procedure for intelligence enhancement. This procedure was first tested on a mouse, Algernon, and later on a retarded human, Charlie Gordon. The enhanced Charlie has the standard science-fictional set of superhuman characteristics; he thinks fast, learns a lifetime of knowledge in a few weeks, and discusses arcane mathematics (not shown). Then the mouse, Algernon, gets sick and dies. Charlie analyzes the enhancement procedure (not shown) and concludes that the process is basically flawed. Later, Charlie dies.

That's a science-fictional enhanced human. A real enhanced human would not have been taken by surprise. A real enhanced human would realize that any simple intelligence enhancement will be a net evolutionary disadvantage - if enhancing intelligence were a matter of a simple surgical procedure, it would have long ago occurred as a natural mutation. This goes double for a procedure that works on rats! (As far as I know, this never occurred to Keyes. I selected Flowers, out of all the famous stories of intelligence enhancement, because, for reasons of dramatic unity, this story shows what happens to be the correct outcome.)

Note that I didn't dazzle you with an abstruse technobabble explanation for Charlie's death; my explanation is two sentences long and can be understood by someone who isn't an expert in the field. It's the simplicity of smartness that's so impossible to convey in fiction, and so shocking when we encounter it in person. All that science fiction can do to show intelligence is jargon and gadgetry. A truly ultrasmart Charlie Gordon wouldn't have been taken by surprise; he would have deduced his probable fate using the above, very simple, line of reasoning. He would have accepted that probability, rearranged his priorities, and acted accordingly until his time ran out - or, more probably, figured out an equally simple and obvious-in-retrospect way to avoid his fate. If Charlie Gordon had really been ultrasmart, there would have been no story. "

We know that Newton's theory of gravity was hard to invent; it *must* not have been obvious, because nobody solved it before Newton, and he was lauded as a hero for his great theory. And yet, it is so simple that we teach it to high school students, and some of them actually understand it. Newton's equation is also a unique solution; the constant of proportionality is fixed by experiment, the M1*M2/r^2 term is fixed by the need to reproduce Kepler's laws (which were well known at the time), and extra terms are excluded, because F must vanish when M2 vanishes, or else you violate the laws of motion which Newton had just discovered.
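The claim that Kepler's laws pin down the inverse-square form can be spot-checked numerically: for circular orbits under an inverse-square force, T^2/r^3 comes out the same for every orbit, which is Kepler's third law. A sketch (with an arbitrary choice of GM = 1):

```python
import math

def orbital_period(r, GM=1.0):
    # Circular orbit under an inverse-square force: v = sqrt(GM / r),
    # hence T = 2*pi*r / v = 2*pi*sqrt(r^3 / GM).
    v = math.sqrt(GM / r)
    return 2 * math.pi * r / v

# Kepler's third law: T^2 / r^3 should be the same for every orbit.
ratios = []
for r in (1.0, 2.0, 5.0):
    T = orbital_period(r)
    ratios.append(T**2 / r**3)
print([round(x, 6) for x in ratios])  # identical values: 4*pi^2 / GM
```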

In other words: Einstein also said that God does not play dice with the universe. However, not only does God play dice, but sometimes he ignores the result and just says it worked.

"Fixed by evidence" != "simple". There are few alternatives to Newton's Laws, perhaps, once you (a) invent calculus as the language of description, the interpreter to run the code; (b) observe Kepler's laws; (c) realize that objects in motion remain in motion unless a force acts upon them, as opposed to Aristotle's view, and therefore the law should be written in second derivatives as opposed to third or first derivatives; etc. etc.

Please recall that my original contention was that Einstein must have had enough observational evidence to fix the information inherent in General Relativity as a solution. If you describe ways that the information in General Relativity can be fixed by evidence, you are not contradicting this.

You are also falling prey to hindsight by not making an equal effort to consider how you could have justified alternatives as unique obvious solutions using subsets of other knowledge known at the time, rather than the particular aspects that now obviously seem so prominent.

"Please recall that my original contention was that Einstein must have had enough observational evidence to fix the information inherent in General Relativity as a solution. If you describe ways that the information in General Relativity can be fixed by evidence, you are not contradicting this."

True; why do you have to contradict the main point of a post to comment on it? My point was that the space of possibilities was not vast; it was quite small, given the common-sense rules of gravity and math which were known at the time. Developing GR took years, not because Einstein had to sort through ten million different versions of the theory, but because developing a single version of the theory is difficult.

"You are also falling prey to hindsight by not making an equal effort to consider how you could have justified alternatives as unique obvious solutions using subsets of other knowledge known at the time, rather than the particular aspects that now obviously seem so prominent."

This is mathematically impossible unless you assume false knowledge. If equations (A, B, C, D, E) are known at the time of Newton, and Newton's theory of gravity is unique if you assume A, C and D, then any alternative theory of gravity must contradict A, C, or D. Suppose that you can construct an alternative theory of gravity, which is unique assuming equations B and E. If you assume that both B and E are true, then the alternative theory of gravity must be true, hence Newton's theory must be false, hence either A, C, or D must be false. We know now that A, C, and D are all true, therefore, either B or E must be false.
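The deduction can be checked mechanically by enumerating truth assignments. Here A through E are hypothetical labels for the background equations, with each theory modeled as uniquely forced by its subset of assumptions and the two theories taken as contradictory:

```python
from itertools import product

def alternative_requires_false_premise():
    """Check the comment's argument: if Newton's law is uniquely forced by
    A, C, D, then a contradictory rival uniquely forced by B, E can only
    hold in worlds where at least one of A, C, D fails - equivalently,
    no consistent world has A, C, D, B, E all true at once."""
    for A, B, C, D, E in product([True, False], repeat=5):
        newton = A and C and D   # Newton's theory forced by A, C, D
        rival = B and E          # rival theory forced by B, E
        if newton and rival:
            continue             # contradictory theories: no such world exists
        if (A and C and D) and (B and E):
            return False         # a consistent world where both sets of premises hold
    return True

print(alternative_requires_false_premise())  # True
```

This is a toy propositional model, not physics; it only illustrates that the conclusion is forced once the premises are granted.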

""Fixed by evidence" != "simple"."

This is certainly true in the general case, but all physics theories which I've studied in detail really are simple, in the bits of entropy sense.

Reading that worthless tripe took up three minutes of my life that I'll never get back. Thanks reddit!

That waste of three minutes wasn't your fault. But the decision to sink more time into posting a comment that obviously won't do any good (not least because it's completely unspecific) was.

Guys, there's something else worth mentioning here.
Einstein had a different conviction about theories. Briefly, in his idealistic view, he held that a real 'Theory' *should* be articulated and conceived ipso facto, without any evidence whatsoever, before *observations* can corroborate the theory's predictions.

In that spirit, Einstein famously devised his intense 'thought experiments', insights of a kind that could perhaps only take shape in Einstein's brain. The slew of scientific developments taking place before and after the relativity era had a different flavor: from photoelectric-effect tests to the sprouting of quantum physics theories, you name it, all involved an observation-research-theory evolution.

In the case of General Relativity alone, however, there was NO experimental foundation put forth with the theory. The REAL evidence (1959+) [not the Eddington result... that was essentially exaggerated!] came much after the theory's postulation (1915), and is still coming. This is where Einstein is more marvelous than he is thought to be. The beauty of a theory is determined by the life it lives: Newton's lived three centuries; Einstein's has survived one, and counting.

In fact, it has been said that Einstein's work on General Relativity during 1914-19 was a period of 'the greatest intellectual human endeavor by a single brain'; see Clark's biography for more.

So there we are, Eliezer: it's not just about bits and observation for something to be conceived and articulated.

+Thanks.

Abraham Pais, one of Einstein's many friends, has said that Einstein loved to joke. Are you sure his "sorry for the good Lord" wasn't a bit of humor?

I've always assumed that it was a joke. If he'd been serious, then he'd have felt sorry for *Eddington*.

Just as no significant algebra can be both complete and consistent, we can expect that in our future someone standing on Einstein's shoulders will "correct" his equations the same way that he expanded upon Newton's.

Scientific theories are never proved correct; at best they are merely not disproved by any tests run against them, and have some utility or other attraction (e.g., "beauty"). Odd that this group would say Einstein was proved correct, in an article about how Eddington was merely failing to propose a test with enough power to disprove it.

I would suggest here where Einstein got his evidence. General relativity started from a simple assumption: that inertial mass and gravitational mass are the same. Before Einstein, this was a mere observation, and nobody had really asked why it was so (I'm oversimplifying here, of course). But Einstein stated this as a fundamental principle, an axiom if you want. And then he went on to draw what logical conclusions could be drawn from that basic axiom. Sure, it took him ten years, because it wasn't obvious at all, and the mathematical tools to do the work were relatively new and obscure. But Einstein never faltered from his initial hypothesis.

And there WAS overwhelming evidence that inertial mass and gravitational mass were the same. Nobody knew for sure if they were EXACTLY the same, but they were sufficiently similar to support Einstein's hypothesis that they were, indeed, exactly the same.

So in Einstein's mind, the fact that posing gravitational mass equal to inertial mass led, logically, to the final conclusion in terms of general relativity, plus the fact that a vast amount of evidence pointed to the two being indeed equal, all that was enough for him to have confidence in the theory. Eddington's measurement was a very difficult one, and the results far from conclusive as has been shown elsewhere. Einstein had every reason to believe that a failure by Eddington to confirm his theory would in no way falsify it (mind you, this was way before Popper and Kuhn!...).

That's my two bits of explanation here. I used to be much more familiar with the history of general relativity, but that was some time ago. Maybe re-reading Pais would help confirm or refute this idea.

Something doesn't feel right. Don't people frequently propose complex theories that turn out to be wrong?

Tarleton, people do propose lots of complex wrong theories, but they don't propose literally quintillions of wrong complex theories for every right complex theory. If the ratio is even ten wrong to one right, you can tell the good guessers must have possessed massive evidence - survivorship bias is not remotely enough to account for it. As for the wrong guessers, they are more likely to have suffered from bad evidence or bad thinking, than from having almost exactly enough evidence processed correctly followed by a wrong guess.

It's years since this thread came up, but just my two cents on this suggestion.

Correct me if I'm significantly wrong, but I think your premise is that overwhelming evidence is first assembled in a good theoretician's brain, is logically processed into a theory, and then the correct theory is presented and found correct by virtue of this process. The crucial process was that they had to accumulate enough pieces of evidence in accord with the theory to select it, since you believe information theory prohibits any other ways of going about this business.

The thing is, if we follow the line of argument that the number of pieces of information must correspond to the number of possible hypotheses, then surely you would need an infinite number of pieces of information because the number of possible hypotheses (possible statements about the universe) is infinite too.

If you argue that it is a finite number, surely you are suggesting that a gigantic number of hypotheses have been removed in pre-selection based on how relevant they appear. If pre-selection occurs, you must also be open to the possibility that the number of possible hypotheses is far less than an arbitrary 100,000,000 and even single-digits. I think the whole accumulated mountain of science actually exists such that you do not need to generate your entire theory from an infinite number of possibilities, but judge from between a grossly-reduced number, the reducing of the vast number of other possibilities having been done by the work of your predecessors.

So if it satisfies you more, there may have been a huge number of possible theories at the start of humankind, but the combined weight of human experience and endeavour has whittled them down in certain areas to numbers which can be distinguished between by great scientists. In other areas, such as the precise nature of consciousness, we are just as baffled as before!

The thing is, if we follow the line of argument that the number of pieces of information must correspond to the number of possible hypotheses, then surely you would need an infinite number of pieces of information because the number of possible hypotheses (possible statements about the universe) is infinite too.

See Bayesian probability, Occam's razor, Kolmogorov complexity, Solomonoff induction. You only need to raise a certain hypothesis in probability above the alternatives, not exclude all other hypotheses with certainty.
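The head post's arithmetic is easy to check directly: 27 bits of evidence means a likelihood ratio of 2^27 favoring the true candidate over each alternative, which is just enough to lift it above 50% among 100,000,000 candidates. A minimal sketch, assuming a uniform prior and evidence counting equally against every alternative:

```python
import math

N = 100_000_000                 # candidate hypotheses (illustrative uniform prior)
bits_needed = math.log2(N - 1)  # likelihood ratio must exceed N - 1 to pass 50%

# Posterior for the favored candidate after 27 bits of evidence,
# versus the N - 1 alternatives the evidence counts against
posterior = 2**27 / (2**27 + (N - 1))

print(round(bits_needed, 2))  # ~26.58 bits needed
print(round(posterior, 3))    # ~0.573: 27 bits just clears the bar
```

Note that the posterior only edges past one half; no hypothesis is excluded with certainty, it is merely outweighed.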

I like to think Einstein's confidence came instead from his belief that Relativity suitably justified the KL divergence between experiments in 1905 and physics theory in 1905. He was not necessarily in full possession of whatever evidence was required to narrow the hypothesis space down to relativity (which is a bit of a misformulation, I feel, since this space still contains a number of other theories both equally and more powerful than Physics+Relativity), but instead possessed enough so that in his own mental Metropolis jumping he stumbled across Relativity (possibly the nearest convenient point climbing from the prior of Physics to the posterior including new evidence for the time) and sat there.

His comment just reflected a belief that new experiments were unlikely to contain information beyond what he had already used. In some sense, their resolution was not yet strong enough to pinpoint something more precise than Relativity.

Not to knock Einstein, of course. Just because you have new evidence drawing you to a different posterior hypothesis doesn't mean that the update is going to be easy. That's perhaps where the philosophy of Bayes runs into the computational limitations of today.

The point of science isn't just to gather evidence. It's to gather evidence without bias. Whatever Einstein was doing looked like really good evidence to him, but he couldn't have been sure it was good evidence. It might have just looked like good evidence. Science works by just ignoring all the evidence that might be biased. You're ignoring a lot of evidence there, but so long as you can gather evidence relatively cheaply, that's not much of a problem.

Suppose Einstein had a 50% chance of being significantly biased. He only gathered 20 bits of evidence instead of 30. If he is biased, GR is very likely wrong. If he isn't, he's very likely right. There's a probability of almost exactly 50% that GR is true. It's definitely worth looking into, but it certainly isn't overwhelming.

The testing isn't so much to prove that GR is true, as to prove that Einstein was judging the probabilities accurately when he theorized GR.
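The arithmetic here can be laid out as a total-probability calculation. The two conditional probabilities below are made-up stand-ins for "very likely wrong" and "very likely right":

```python
p_biased = 0.5             # chance Einstein's evidence-gathering was significantly biased
p_gr_if_biased = 0.001     # hypothetical: with bias, GR is very likely wrong
p_gr_if_unbiased = 0.999   # hypothetical: without bias, GR is very likely right

# Law of total probability: average the two cases by the chance of bias
p_gr = p_biased * p_gr_if_biased + (1 - p_biased) * p_gr_if_unbiased
print(round(p_gr, 2))  # 0.5: worth looking into, but not overwhelming
```

With a 50% chance of bias, the answer sits near one half almost regardless of how extreme the two conditional probabilities are.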

Was Einstein really correct? This link http://www.infinite-energy.com/iemagazine/issue87/hoax.html

led me to the following: http://www.scientificexploration.org/journal/jse_13_2_mccausland.pdf

From where I quote:

The results of the test were claimed as a triumph for Einstein, and his worldwide fame dates from this event. Ironically enough, attempts made at later eclipses suggest that Eddington underestimated the uncertainties inherent in such a difficult observation, and even today Einstein's prediction has not been tested with the precision one would wish.

The clocks on GPS satellites must be corrected for relativistic time dilation.

There is some controversy in this respect:

http://www.amazon.com/Escape-Einstein-Ronald-R-Hatch/product-reviews/0963211307/

The author is one of the world's foremost experts on the Global Positioning System and a former president of the Institute of Navigation. His book discusses GPS satellite data that contradicts Einstein's relativity theories and proposes his own Modified Lorentz Ether Gauge Theory (MLET) as a replacement for Einstein's relativity. It agrees at first order with relativity but corrects for certain astronomical anomalies not explained by relativity theory.

My point was that flaws in Eddington's experiments are infinitesimally weak evidence of flaws in GR. I have no opinion on successors to the theory.

Oops, you are violating the conservation of expected evidence.

I think you should probably pick a different example - a book published by a guy in 1992 that has one review on Amazon?

ETA: And the review is by some guy who tends to positively review books that discuss that special relativity is incorrect?

It's worth noting that the Michelson-Morley experiment - designed in 1887 specifically to test the idea of light traveling through an ether (the prevailing theory before relativity), which would cause some sort of ether drift - came up essentially null. It placed an upper limit of 30 km/s on the speed of an ether, far lower than it would need to be if it truly existed. Each subsequent experiment that was performed using more accurate equipment has lowered this upper limit. The current upper limit of anisotropy in a single direction (that is, the most the speed of light can vary between two observers) based on an M-M style experiment is 0.9 m/s, and bidirectional anisotropy is 2x10^-13 m/s. More recent alternative experimental procedures have reduced these figures even further.

Unentrained ether (i.e., ether that doesn't interact with matter or gravity) should show an ether drift exactly matching the speed of the earth's revolution around the sun, about 108,000 km/h (30 km/s). M-M style experiments and many others have conclusively proven this idea false. While an entrained or partially entrained ether (matter and/or gravity drag the ether along to some degree) can have a drift considerably lower, one still expects a significant difference in the observed speed of light for two different observers. It also has its own problems that an unentrained ether does not have - namely that light must slow down over time in an entrained ether, even when traveling through a vacuum. This is clearly not observed.

In other words, Hatch's theory was proven false a century before he proposed it, and further experiments on the subject (motivated by the search for a quantum theory of gravity) have only served to drive ever larger nails in its coffin.

In other words, Hatch's theory was proven false a century before he proposed it,

Proven false a century before he proposed it? That's... well, not exactly surprising but definitely embarrassing!

It certainly is!

This was actually very present in my mind when I read about Hatch's idea, because I'm currently reading Hawking's "A Brief History of Time", and he goes into the proof against ether. Of course I'd heard of it before, but since nobody else mentioned it and I had been thinking about it already, I simply had to.

It's amazing what people who haven't learned their physics think about physics! It reminds me of another new version of an old theory of physics I read about on one of these posts - a new expansion theory of physics, which would require us to throw away everything since Newton. Pretty arrogant, and the explanations for it amounted to declarations by fiat that it was so - there was no evidence for it at all, and it in fact required great convoluted maths in order to explain our observations. The author of that particular theory was apparently unaware of Newton's inverse-square law of gravity, of Einstein's more accurate formula for the same, and of the extensive experiments demonstrating it over the last couple hundred years.

Note that I'm not saying *I* have learned my physics even as well as these misguided would-be physicists have; I'm just saying I would never propose a new theory of physics without being certain it hadn't already been conclusively disproven!

There have been literally thousands of confirmations of gravitational lensing - hell, cosmologists have created a 3D map of dark matter based on gravitational lensing observed by the Hubble Space Telescope. It's a phenomenon that does not occur with Newtonian gravity, but absolutely must occur if General Relativity is correct.

It is one of those side-effects of the theory that can be used to disprove it if the side-effect does not occur. GR specifically demands this effect exist, because gravity is literally the bending of space-time, which affects the straight-line path of anything traveling across space-time. Light travels across space-time just as much as matter with mass, so its path must be affected by any curvature caused by a massive object. It's the same kind of test as bouncing a ping-pong ball straight up and down on a train going 90mph - if the ball falls off the table (as Aristotelian motion suggested) instead of bouncing in the same spot, Newton's laws of motion are worthless.

See Wikipedia for more on gravitational lensing.

This also happens to be how cosmologists expect to see the first direct observational evidence of black holes. My understanding is that there is not currently a radio-telescope large enough to discern such an effect yet, but one cosmologist is connecting radio-telescopes across the US to create a massive virtual telescope that would have the resolution required. Pretty cool stuff.

I'm not a physicist, so I can't comment on the details of the theory. The links that I provided claim that the original experiment that proved Einstein correct didn't in fact do so. If the claim is right, it would call into question the modus operandi of the scientific community. That doesn't mean Einstein was wrong, but maybe we should be a bit more distrustful of the scientific consensus that is presented as correct and proven.

You should be a bit more distrustful of perpetual motion advocacy websites.

That's disingenuous. Your original suggestion was clearly that Einstein was wrong, not just that one experiment was exaggerated. The Amazon book claims Einstein's entire theory is incorrect. The very quote you provide also says that "even today" Einstein's theories have not been tested with enough precision, and the conclusion of the quoted article seems to be that General Relativity is incorrect.

Thus, if you distrust anything, it should probably be the article and the book.

Regarding the experiment, the article says that scientists merely mistakenly thought the experiment was greater confirmation than it was, which has already been noted. Surely the fact that the scientists themselves eventually realized this, and designed many other experiments which *did* confirm General Relativity, is enough to give some measure of trust in the "modus operandi" of the scientific community *today*.

Scientists are only human, and make mistakes like anyone else. Conspiracy theories and distrust are unwarranted, I think.

The first thing I notice about those links is that they tell stories about *people*. Even more telling, they're stories about *celebrities*. This is an extremely good heuristic for identifying crackpots. It mostly talks about *who* proposed theories and how credible those people supposedly are, but this is not how science works and this is not what scientific papers sound like. Science is about the theories and experiments themselves; the information about who did them is relatively unimportant, and when scientists do mention people it's to acknowledge their contributions, and it takes up only a very small fraction of the text.

On the other hand, when a non-scientist tries to make sense of a field like physics without the necessary technical skills, politics-level arguments are all they have to go on, since they can't actually understand the subject at the object-level. They then imagine that everyone *else* is going on politics, too. Then, when every scientist they talk to tells them to take their politics-level arguments and get lost (because they're busy working at the object-level), they don't understand why, and imagine it's because of a conspiracy. Common pattern, easily recognizable, and anti-correlated with truth.

I am not sure what this post is supposed to communicate. People are generally far from being perfect Bayesian agents.

...you have to gather "evidence" to confirm it - to convince the scientific community, or justify saying that you believe in your hunch.

Are there arguments against this heuristic that are applicable to real world circumstances?

Einstein didn't come up with General Relativity that way. He didn't even do the hard math himself. He came up with some little truths (e.g. equivalence, speed of light is constant, covariance, must reduce to Newtonian gravity in unexceptional cases), from a handful of results that didn't seem to fit classical theory, and then he found a set of equations that fit.

Newtonian gravity provided heaps of data points and a handful of non-fits. Einstein bootstrapped on prior achievements like Newtonian gravity and special relativity and tweaked them to fit a handful of additional data points better. His confidence came from fitting 100% of the small available data set (something that wasn't clear in the case of the cosmological constant), however small it may have been. The minimum bit hypothesis assumes that all bits are created equal. But they aren't. Some bits advance the cause not at all, some bits advance it a great deal.

Similarly, the 27 bit rule for 100,000,000 people assumes that the bits have equal numbers of people who are yes and no on a question. In fact, some bits are more discriminating than others. "Have you ever been elected to an office that requires a statewide vote or been a Vice President?" (perhaps two bits of information), is going to eliminate 99.9999%+ of potential candidates for President, yet work nearly perfectly to dramatically narrow the field from the 100,000,000 eligible candidates. "Do you want to run for President?", cuts another 90%+ of potential candidates.

Einstein was confident because his bits had greater discriminatory power than other bits of information. There are only so many ways it is logically possible to fit the data he had.

Similarly, the 27 bit rule for 100,000,000 people assumes that the bits have equal numbers of people who are yes and no on a question. In fact, some bits are more discriminating than others. "Have you ever been elected to an office that requires a statewide vote or been a Vice President?" (perhaps two bits of information), is going to eliminate 99.9999%+ of potential candidates for President, yet work nearly perfectly to dramatically narrow the field from the 100,000,000 eligible candidates. "Do you want to run for President?", cuts another 90%+ of potential candidates.

It's not an assumption, it's a definition. Whatever is enough to cut your current set of candidates in half is "one bit"- the first bit will eliminate 50,000,000 people, the last bit will eliminate 1. An answer that reduces the set of candidates to .000001 times its original size contains 20 bits of information. (Notice that the *question* doesn't have bits of information associated with it, since each possible answer reduces the candidate set by a different amount- if they said "no," you acquired only a millionth of a bit of information.)
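Under this definition, the information in an answer is just the log of how much it shrinks the candidate pool, which is a one-liner to compute (the population figures are the illustrative ones from the comments above):

```python
import math

def bits(survivors, total):
    """Information gained when an answer cuts `total` candidates down to `survivors`."""
    return math.log2(total / survivors)

print(bits(50_000_000, 100_000_000))     # halving the field is exactly 1 bit
print(round(bits(100, 100_000_000), 2))  # cutting to a millionth: ~19.93 bits
```

So the "statewide office" answer is worth about 20 bits, not 2: the question's discriminating power is measured by the fraction eliminated, not by how short it is.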

Maybe we should also consider that Einstein fully understood the irony in his statement, and was in a humorous mood. After all, what he would do if the attempt to verify did not succeed was not of any import whatever. It was a typical "sell newspapers" question.

It is humorous to see 'rationalists' analyzing a rationalist.

I guess a defense of old Albert would go something like this: the route he took to establish his theory didn't rely upon empirical evidence of the sort Eddington was trying to discover, but rather was an elegant way to explain certain unusual properties of light and energy which, once he had formulated his theory, it seemed obvious to him could not be explained any other way. The kind of empirical validation which Eddington was carrying out was a laudable and necessary step in the process of theory confirmation/falsification. Nevertheless, it was entirely reasonable for Einstein to believe that no such confirmation was necessary: relative to the theoretical status quo before the theory of relativity, that theory had vastly greater explanatory power, so any theory which might supplant it would have to incorporate elements of the theory, or postulates very similar to it, to explain the same things. Einstein had a sense of humour and was, I take it, simply relaying the idea that he thought it much more likely that any negative result from Eddington's expedition would turn out to be due to poor expedition data rather than a problem with the theory; I don't think he was ever claiming (nonsensical) 100% certainty. The man may have been a genius, but he wasn't an idiot.

Hello, I am from Russia. I want to introduce interested readers to my article: «The Einstein's special theory of relativity is the greatest scam in the history of physic and alternative to it the concept of the Lorentz-Fitzgerald-Planck». The English version can be viewed on the Internet at the following address: http://www.akelevnm.narod.ru/aboutsto2.htm The article is discussed at the forum of the Moscow Engineering Physics Institute at the address: http://corum.mephist.ru/index.php?showtopic=20609&st=0 There you can make comments or ask questions in English.

Akelev N.

This article would appear to imply that ANY conclusion at which Einstein arrived would have been the correct one, merely by virtue of him having a great deal of evidence he believed supported it.

Well, Einstein wouldn't arrive at just *any* conclusion.

Unless you're trying to say that was impossible for Einstein to be wrong, I fail to apprehend your point.

The probability distribution over conclusions about physics, representing the probability that a person would arrive at each conclusion, is not the same for Einstein as for an arbitrary person. And this difference is not arbitrary.

Dually, if I told you I had a conclusion about physics in an envelope, and asked what probability you would give that it was a true one, you might give a figure X. If I then told you the conclusion was made by Einstein and asked what probability you now gave that it was a true conclusion, I expect you would give a figure greater than X.

Once you assume:

1) the equations describing gravity are invariant under all coordinate transformations,

2) energy-momentum is not locally created or destroyed,

3) the equations describing gravity involve only the flow of energy-momentum and the curvature of the spacetime metric (and not powers or products or derivatives of these),

4) the equations reduce to ordinary Newtonian gravity in a suitable limit,

then Einstein's equations for general relativity are the only possible choice... except for one adjustable parameter, the cosmological constant.
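For concreteness, the unique equations these four assumptions pin down are the standard Einstein field equations, written here in their usual modern form:

```latex
% Curvature of the spacetime metric g_{\mu\nu} on the left, flow of
% energy-momentum T_{\mu\nu} on the right, with the cosmological constant
% \Lambda as the single adjustable parameter the assumptions leave free.
\[
  R_{\mu\nu} \;-\; \tfrac{1}{2} R\, g_{\mu\nu} \;+\; \Lambda g_{\mu\nu}
  \;=\; \frac{8\pi G}{c^4}\, T_{\mu\nu}
\]
```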

(First Einstein said this constant was nonzero, then he said that was the "biggest mistake in his life", and then it turned out he was right in the first place. It's not zero, it's roughly 0.0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000001. So, a bit of waffling on this issue is understandable.)

It took Einstein about 10 years of hard work to figure this out, with a lot of help from the mathematician Marcel Grossmann, who taught him the required math. But by the time he talked to that reporter he knew this stuff. That's what gave him his confidence.

His assumptions 1)-4) could have been wrong, of course. But he was playing a strong hand of cards - and he knew it.

By the way, he did write a paper where he got the equations wrong and predicted a *wrong* value for the deflection of starlight by the Earth's gravitational field. But luckily he caught his mistake before the experiment was done. If he'd caught his mistake afterwards, lots of people would have thought he was just retroactively fudging his theory to fit the data.

I can see why Einstein would assume 1), 2) and 4), but what was his motivation for assuming 3)? Just some intuition about simplicity?

Realize that his theory was replacing ether theory. As people learned more, ether theory required increasingly arbitrary "patches" to work. If GR were not simpler than ether theory, it wasn't a good candidate to replace it, since the Lorentz transformations in ether theory still worked mathematically.