Particles break light-speed limit?

post by Kevin · 2011-09-23T11:00:36.866Z · LW · GW · Legacy · 175 comments

http://www.nature.com/news/2011/110922/full/news.2011.554.html

http://arxiv.org/abs/1109.4897v1

http://usersguidetotheuniverse.com/?p=2169

http://news.ycombinator.com/item?id=3027056

Ereditato says that he is confident enough in the new result to make it public. The researchers claim to have measured the 730-kilometre trip between CERN and its detector to within 20 centimetres. They can measure the time of the trip to within 10 nanoseconds, and they have seen the effect in more than 16,000 events measured over the past two years. Given all this, they believe the result has a significance of six-sigma — the physicists' way of saying it is certainly correct. The group will present their results tomorrow at CERN, and a preprint of their results will be posted on the physics website ArXiv.org.

At least one other experiment has seen a similar effect before, albeit with a much lower confidence level. In 2007, the Main Injector Neutrino Oscillation Search (MINOS) experiment in Minnesota saw neutrinos from the particle-physics facility Fermilab in Illinois arriving slightly ahead of schedule. At the time, the MINOS team downplayed the result, in part because there was too much uncertainty in the detector's exact position to be sure of its significance, says Jenny Thomas, a spokeswoman for the experiment. Thomas says that MINOS was already planning more accurate follow-up experiments before the latest OPERA result. "I'm hoping that we could get that going and make a measurement in a year or two," she says.


Perhaps the end of the era of the light cone and the beginning of the era of the neutrino cone? I'd be curious to see your probability estimates for whether this pans out, or other crackpot hypotheses to explain the results.

175 comments

Comments sorted by top scores.

comment by [deleted] · 2011-09-23T12:39:05.073Z · LW(p) · GW(p)

From an actual physicist:

Chang Kee Jung, a neutrino physicist at Stony Brook University in New York, says he'd wager that the result is the product of a systematic error. "I wouldn't bet my wife and kids because they'd get mad," he says. "But I'd bet my house."

Replies from: see
comment by see · 2011-09-24T07:43:52.890Z · LW(p) · GW(p)

Yes, but what would he want as the opposing wager? I'll gladly put up a cent (or, for that matter, $10,000,000,000,000 ZWR) against his house, while I wouldn't consider betting $10,000.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2011-09-27T10:09:49.816Z · LW(p) · GW(p)

I'll take bets at 99-to-1 odds against any information propagating faster than c. Note that this is not a bet for the results being methodologically flawed in any particular way, though I would indeed guess some simple flaw. It is just a bet that when the dust settles, it will not be possible to send signals at a superluminal velocity using whatever is going on - that there will be no propagation of any cause-and-effect relation at faster than lightspeed.

My real probability is lower, but I think that anyone who'd bet against me at 999-to-1 will probably also bet at 99-to-1, so 99-to-1 is all I'm offering.

I will not accept more than $20,000 total of such bets.

Replies from: Stuart_Armstrong, FAWS, ChrisHallquist, Kevin, MichaelHoward
comment by Stuart_Armstrong · 2011-10-04T19:20:09.008Z · LW(p) · GW(p)

I'll take that bet, for a single pound on my part against 99 from Eliezer.

(Explanation: I have a 98-2 bet with my father against superluminal information propagation being true, so this sets up a nice little arbitrage.)
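
A quick payoff check makes the arbitrage concrete. The stake sizes below are assumptions, reading the two bets as £1 against £99 with Eliezer and £98 against £2 with the father:

```python
# Assumed stakes: 1 GBP vs 99 GBP with Eliezer (betting that FTL is real),
# 98 GBP vs 2 GBP with the father (betting that FTL is not real).
def net_payoff(ftl_is_real: bool) -> int:
    eliezer_bet = +99 if ftl_is_real else -1  # win 99, or lose the 1 staked
    father_bet = -98 if ftl_is_real else +2   # lose the 98 staked, or win 2
    return eliezer_bet + father_bet

for outcome in (True, False):
    print(f"FTL real = {outcome}: net {net_payoff(outcome):+d} GBP")
# Both outcomes print +1: a guaranteed pound whichever way it settles.
```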

comment by FAWS · 2011-09-27T10:58:28.868Z · LW(p) · GW(p)

Is that c the speed of light in vacuum or c the constant in special relativity?

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2011-09-27T20:36:51.446Z · LW(p) · GW(p)

c is the constant as it appears in fundamental physical equations, relativistic or quantum. Anything slowing down the propagation of photons through an apparent vacuum (such as interaction with dark matter) which did not affect, for example, the mass-energy equivalence of E=mc², would not win the bet.

comment by ChrisHallquist · 2011-10-06T08:34:49.369Z · LW(p) · GW(p)

If Kevin doesn't go through with taking that bet for $202, I'll take it for $101.

comment by Kevin · 2011-09-29T12:21:12.293Z · LW(p) · GW(p)

I suggest clarifying the bet to say "information propagating faster than c as c is defined at the time of this bet". With that clarification, I can pay up front in cash for $202 as soon as possible.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2011-09-29T17:21:40.261Z · LW(p) · GW(p)

There are many definitions of c - it appears as a constant in many different physical equations. Right now, all of these definitions are consistent. If you have a new physics where all these definitions remain consistent and you can still transmit information faster than c, then certainly I have lost the bet. Other cases would be harder to settle - I did state that weird physics along the lines of "this is why photons are slowed down in a vacuum by dark matter, but neutrinos aren't slowed" wouldn't win the bet.

comment by MichaelHoward · 2011-09-28T01:42:18.951Z · LW(p) · GW(p)

Minerva remembered what Harry had told her... how people were usually too optimistic, even when they thought they were being pessimistic. It was the sort of information that preyed on your mind, dwelling in it and spinning off nightmares...

But what was the worst that could happen?

Replies from: pedanterrific, Eliezer_Yudkowsky
comment by pedanterrific · 2011-09-28T02:06:22.489Z · LW(p) · GW(p)

Actually, what is the worst that could happen? It's not [the structure of the universe is destabilized by the breakdown of causality], because that would have already happened if it were going to.

The obvious one would be [Eliezer loses $20,000], except that would only occur in the event that it were possible to violate causality, in which case he would presumably arrange to prevent his past self from making the bet in the first place, yeah? So really, it's a win-win.

Unless one of the people betting against him is doing so because ve received a mysterious parchment on which was written, in ver own hand, "MESS WITH TIME."

Replies from: JoshuaZ
comment by JoshuaZ · 2011-09-28T02:12:16.301Z · LW(p) · GW(p)

If there are ways to violate causality they are likely restrictive enough that we can't use them to violate causality prior to when we knew about the methods (roughly). This is true for most proposed causality violating mechanisms. For example, you might be able to violate causality with a wormhole, but you can't do it to any point in spacetime prior to the existence of the wormhole.

In general, if there are causality violating mechanisms we should expect that they can't violate causality so severely as to make the past become radically altered, since we just don't see that. It is conceivable that such manipulation is possible but that once we find an effective method of violating causality we will be quickly wiped out (possibly by bad things related to the method itself), but this seems unlikely even assuming one already has a causality-violating mechanism.

Replies from: wedrifid, pedanterrific
comment by wedrifid · 2011-09-28T02:24:37.935Z · LW(p) · GW(p)

Mostly agree. Would downgrade to "can't or won't". Apart from a little more completeness, the difference matters for anthropic considerations.

Replies from: pedanterrific, JoshuaZ
comment by pedanterrific · 2011-09-28T02:30:25.275Z · LW(p) · GW(p)

Does it even make sense to say "won't", or for that matter bring up anthropic considerations, in reference to causality violation?

This is a serious question, I don't know the answer.

Replies from: JoshuaZ, wedrifid
comment by JoshuaZ · 2011-09-28T02:42:24.966Z · LW(p) · GW(p)

Does it even make sense to say "won't", or for that matter bring up anthropic considerations, in reference to causality violation?

I'm not sure. If a universe allows sufficient causality violation then it may be that it will be too unstable for observers to arise in that universe. But I'm not sure about that. This may be causality chauvinism.

Replies from: pedanterrific
comment by pedanterrific · 2011-09-28T03:03:29.133Z · LW(p) · GW(p)

(I feel like there's a joke to be made here, something to do with "causality chauvinism", "causality violation", "too unstable for observers to arise", the relative "looseness" of time travel rules, maybe also the "Big Bang"... it's on the tip of my brain... nah, I got nothing.)

comment by wedrifid · 2011-09-28T02:48:51.380Z · LW(p) · GW(p)

Does it even make sense to say "won't" [...] in reference to causality violation?

Yes. (Leave out the anthropics, when that makes sense to bring up is complicated.)

Most of the reasons for saying:

If there are ways to violate causality they are likely restrictive enough that we can't use them to violate causality prior to when we knew about the methods (roughly).

... are somewhat related to "causality doesn't appear to be violated". If (counterfactually) causality can be violated then it seems like it probably hasn't happened yet. This makes it a lot more likely that causality violations (like wormholes and magic) that are discovered in the future will not affect things before their discovery. This includes the set of (im)possible worlds in which prior-to-the-magic times cannot be interfered with and also some other (im)possible worlds in which it is possible but doesn't happen because it is hard.

An example would be faster-than-light neutrinos. It would be really damn hard to influence the past significantly with such neutrinos with nothing set up to catch them. It would be much easier to set up a machine to receive messages from the future.

It may be worth noting that "causality violation" does not imply "complete causality meltdown". The latter would definitely make "won't" rather useless.

Replies from: pedanterrific
comment by pedanterrific · 2011-09-28T03:50:43.941Z · LW(p) · GW(p)

... "causality doesn't appear to be violated"

Well, it's just... how could you tell? I mean, maybe the angel that told Columbus to sail west was a time-travelling hologram sent to avert the Tlaxcalan conquest of Europe.

An example would be faster-than-light neutrinos. It would be really damn hard to influence the past significantly with such neutrinos with nothing set up to catch them.

Well yes, I understand you probably couldn't use faster-than-light neutrinos from the future (FTLNFTFs) to effect changes in the year 1470 any more easily or precisely than, say, by creating a neutrino burst equivalent to 10^10^9999 galaxies going supernova simultaneously one AU from Earth, presumably resulting in the planet melting or some such thing, I don't know.

However, elsewhere in this thread I suggested a method that takes advantage of a system that already exists and is set up to detect neutrinos (admittedly not FTLNFTFs specifically, though I don't know why that should matter). I still don't see exactly what prevents Eliezer_2831 from fiddling around with MINOS's or CERN's observations in a causality-violating but not-immediately-obvious manner.

Other than, you know, basic human decency.

Replies from: wedrifid
comment by wedrifid · 2011-09-28T08:19:11.608Z · LW(p) · GW(p)

Well, it's just... how could you tell? I mean, maybe the angel that told Columbus to sail west was a time-travelling hologram sent to avert the Tlaxcalan conquest of Europe.

We obviously can't with certainty. But we can say it is highly unlikely. The universe looks to us like it has a consistent causal foundation rather than being riddled with arbitrary causality violations. That doesn't make isolated interventions impossible, just unlikely.

I still don't see exactly what prevents Eliezer_2831 from fiddling around with MINOS's or CERN's observations in a causality-violating but not-immediately-obvious manner.

Overwhelming practical difficulties. To get over 800 years of time travel in one hop using neutrinos going very, very slightly faster than light, the neutrinos would have to be shot from a long, long way away. Getting a long, long way away takes time, and is only useful if you are travelling close enough to the speed of light that on the return trip the neutrinos gain more time than you spent travelling. Eliezer_2831 would end up on the other side of the universe somewhere, and the energy required to shoot enough neutrinos to communicate over that much distance would be enormous. The scope puts me in mind of the Tenth Doctor: "And it takes a lot of power to send this projection— I'm in orbit around a supernova. [smiling weakly] I'm burning up a sun just to say goodbye."
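
For scale, a rough sketch of the required emitter distance, assuming the OPERA figures (neutrinos arriving ~60 ns early over the 730 km baseline) set the fractional speed excess:

```python
C = 299_792_458.0             # m/s
SECONDS_PER_YEAR = 3.156e7

light_time = 730e3 / C        # ~2.4 ms for the CERN -> Gran Sasso baseline
epsilon = 60e-9 / light_time  # fractional speed excess, ~2.5e-5

target_gain = 800 * SECONDS_PER_YEAR  # gain 800 years in a single hop
distance_ly = (target_gain / epsilon) / SECONDS_PER_YEAR
print(f"required flight path: ~{distance_ly / 1e6:.0f} million light-years")
# ~32 million light-years -- far outside the galaxy, as described above.
```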

I'm not sure if that scenario is more or less difficult than the remote neutrino manufacturing scenario. The engineering doesn't sound easy, but once it is done, any time before the heat death of the universe, you just win. You can send anything back to (almost) any time.

Replies from: pedanterrific
comment by pedanterrific · 2011-09-28T17:31:35.947Z · LW(p) · GW(p)

The engineering doesn't sound easy, but once it is done, any time before the heat death of the universe, you just win.

Unless you're fighting Photino Birds.

But that's pretty unlikely, yeah.

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2012-04-20T07:19:19.125Z · LW(p) · GW(p)

That sounds like it's a reference to something awesome. Is it?

comment by JoshuaZ · 2011-09-28T02:41:31.236Z · LW(p) · GW(p)

In the context of almost every proposed causality violation mechanism I've seen seriously discussed, it really is can't, not won't. Wormholes aren't the only example. Tipler cylinders, for example, don't allow time travel prior to the point when they started rotating. Gödel's rotating universe has similar restrictions. Is there some time travel proposal I'm missing?

I agree that when considering anthropic issues won't becomes potentially relevant if we had any idea that time travel could potentially allow travel prior to the existence of the device in question. In that case, I'd actually argue in the other direction: if such machines could exist, I'd expect to see massive signs of such interference in the past.

Replies from: wedrifid
comment by wedrifid · 2011-09-28T02:56:02.502Z · LW(p) · GW(p)

In the context of almost every proposed causality violation mechanism I've seen seriously discussed, it really is can't, not won't.

There are plenty of mechanisms in which can't applies. There are others which don't have that limitation. I don't even want to touch what qualifies as 'seriously discussed'. I'm really not up to date with which kinds of time travel are high status.

Replies from: JoshuaZ
comment by JoshuaZ · 2011-09-28T03:00:19.296Z · LW(p) · GW(p)

Ignore status issues. Instead focus on time travel mechanisms that don't violate SR. Are there any such mechanisms which allow such violation before the time travel device has been constructed? I'm not aware of any.

Replies from: MugaSofer
comment by MugaSofer · 2012-09-25T09:47:09.896Z · LW(p) · GW(p)

Alcubierre drives.

comment by pedanterrific · 2011-09-28T02:23:42.412Z · LW(p) · GW(p)

I'm pretty sure - not totally sure, I'm perfectly willing to be corrected by anyone with more knowledge of the physics than me, but still, pretty sure - that the stated objection would not preclude The Future from sending back time-travelling neutrinos to, say, the Main Injector Neutrino Oscillation Search in a pattern that spells out the Morse code for T-E-L-L--E-Y--D-N-M-W-T, possibly even in such a way that they wouldn't figure out the code until after CERN's results were published.

Replies from: JoshuaZ
comment by JoshuaZ · 2011-09-28T02:46:20.695Z · LW(p) · GW(p)

This would be really difficult. The primary problem is that neutrinos don't interact with most things, so to send a signal you'd need to send a massive burst of neutrinos to the point where we should expect it to show up on other neutrino detectors also. The only plausible way this might work is if someone used a system at CERN, maybe the OPERA system itself in a highly improved and calibrated form to send the neutrinos back.

Although if neutrinos can go back in time, then so much of physics may be wrong that this sort of speculation is extremely unlikely to be at all helpful. This is almost like going to a 17th-century physicist and asking them to speculate about what things would be like if nothing could travel faster than the speed of light.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2011-09-28T01:51:58.760Z · LW(p) · GW(p)

Yeah, see, I'm not betting against random cool new physics, I wouldn't offer odds like that on there not being a Higgs boson, I'm betting on the local structure of causality. Could I be wrong? Yes, but if I have to pay out that entire bet, it won't be the most interesting thing that happened to me that day.

How confident am I of this? Not just confident to offer to bet at 99-to-1 odds. Confident enough to say...

"Well, that was an easy, risk-free $202."

Or to put it even more plainly:

"You turned into a cat! A SMALL cat! You violated Conservation of Energy! That's not just an arbitrary rule, it's implied by the form of the quantum Hamiltonian! Rejecting it destroys unitarity and then you get FTL signaling! And cats are COMPLICATED! A human mind can't just visualize a whole cat's anatomy and, and all the cat biochemistry, and what about the neurology? How can you go on thinking using a cat-sized brain?"

McGonagall's lips were twitching harder now. "Magic."

"Magic isn't enough to do that! You'd have to be a god!"

Replies from: Kevin, Eugine_Nier
comment by Kevin · 2011-09-29T12:03:59.751Z · LW(p) · GW(p)

The consequence of the FTL neutrinos CERN thinks they found at six-sigma significance is not the breakdown of causality. You can have faster-than-light neutrinos without backwards propagation of information. This is not the end of normality, but a new normality, one where Lorentz invariance is broken. This would mean that there is a universal reference frame that trumps but doesn't destroy relativity. If anything, a universal reference frame seems like a stronger causal structure than relativity.

This whole thing would be so normal that there's a pre-existing effective field theory called the Standard Model Extension. http://en.wikipedia.org/wiki/Standard-Model_Extension

http://en.wikipedia.org/wiki/Lorentz_transformation

http://en.wikipedia.org/wiki/Lorentz_covariance

http://en.wikipedia.org/wiki/Lorentz-violating_neutrino_oscillations

are suggested Wikipedia skimming; http://blogs.discovermagazine.com/cosmicvariance/2005/10/25/lorentz-invariance-and-you/ is what gave me the intuition of a universal inertial frame.

I'm at around 10% odds on this whole thing seeming like weak consensus in 3 years, and something like >80% odds (on a very, very long bet) that FTL information travel is possible outside of the local structure of causality.

Replies from: Eliezer_Yudkowsky, JoshuaZ
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2011-09-29T17:53:27.039Z · LW(p) · GW(p)

It's not about transmitting information into the past - it's about the locality of causality. Consider Judea Pearl's classic graph with SEASONS at the top, SEASONS affecting RAIN and SPRINKLER, and RAIN and SPRINKLER both affecting the WETness of the sidewalk, which can then become SLIPPERY. The fundamental idea and definition of "causality" is that once you know RAIN and SPRINKLER, you can evaluate the probability that the sidewalk is WET without knowing anything about SEASONS - the universe of causal ancestors of WET is entirely screened off by knowing the immediate parents of WET, namely RAIN and SPRINKLER.
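
To make "screened off" concrete, here is a toy numerical version of that graph; all the probability numbers are invented for illustration:

```python
import itertools

# SEASON -> {RAIN, SPRINKLER} -> WET, with made-up conditional probabilities.
P_SEASON = {"dry": 0.5, "wet": 0.5}
P_RAIN = {"dry": 0.1, "wet": 0.6}       # P(RAIN=True | SEASON)
P_SPRINKLER = {"dry": 0.7, "wet": 0.2}  # P(SPRINKLER=True | SEASON)

def p_wet(rain, sprinkler):             # WET depends only on its parents
    return 0.95 if (rain or sprinkler) else 0.05

# Build the full joint distribution over (season, rain, sprinkler, wet).
joint = {}
for s, r, k, w in itertools.product(P_SEASON, (True, False), (True, False), (True, False)):
    p = (P_SEASON[s]
         * (P_RAIN[s] if r else 1 - P_RAIN[s])
         * (P_SPRINKLER[s] if k else 1 - P_SPRINKLER[s])
         * (p_wet(r, k) if w else 1 - p_wet(r, k)))
    joint[s, r, k, w] = p

def cond_wet(rain, sprinkler, season=None):
    """P(WET=True | RAIN, SPRINKLER, and optionally SEASON), from the joint."""
    hits = sum(p for (s, r, k, w), p in joint.items()
               if r == rain and k == sprinkler and season in (None, s) and w)
    base = sum(p for (s, r, k, w), p in joint.items()
               if r == rain and k == sprinkler and season in (None, s))
    return hits / base

# Once RAIN and SPRINKLER are known, learning SEASON changes nothing:
print(cond_wet(True, False), cond_wet(True, False, "dry"), cond_wet(True, False, "wet"))
# -> 0.95 0.95 0.95 (up to float rounding)
```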

Right now, we have a physics where (if you don't believe in magical collapses) the amplitude at any point in quantum configuration space is causally determined by its immediate neighborhood of parental points, both spatially and in the quantum configuration space.

In other words, so long as I know the exact (quantum) state of the universe for 300 meters around a point, I can predict the exact (quantum) future of that point 1 microsecond into the future without knowing anything whatsoever about the rest of the universe. If I know the exact state for 30 centimeters around, I can predict the future of that point one nanosecond later. And so on to the continuous limit: the causal factors determining a point's infinitesimal future are screened off by knowing an infinitesimal spatial neighborhood of its ancestors.
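
The neighborhood size is just c times the prediction horizon; a trivial check of the figures above:

```python
C = 299_792_458.0  # m/s
for dt in (1e-6, 1e-9):  # 1 microsecond, 1 nanosecond
    print(f"dt = {dt:g} s -> radius = c*dt = {C * dt:.2f} m")
# 1 microsecond -> ~300 m; 1 nanosecond -> ~0.3 m
```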

This is the obvious analogue of Judea Pearl's Causality for continuous time; instead of discrete causal graphs, you have a continuous metric of relatedness (space) which shrinks to an infinitesimal neighborhood as you consider infinitesimal causal succession (time).

This, in turn, implies the existence of a fundamental constant describing how the neighborhood of causally related space shrinks as time diminishes, to preserve the locality of causal relatedness in a continuous physics.

This constant is, obviously, c.

I've never read this anywhere else, by the way. It clearly isn't universally understood, because if all physicists understood the universe in these terms, none of them would believe in a "collapse of the wavefunction", which is not locally related in the configuration space. I would be surprised neither to find that the above statement is original, nor that it has been said before.

I am attempting to bet that physics still looks like this after the dust settles. It's a stronger condition than global noncircularity of time - not all models with globally noncircular time have local causality.

If violating Lorentz invariance means that physics no longer looks like this, then I will bet at 99-to-1 odds against violations of Lorentz invariance. But I can't make out from the Wikipedia pages whether Lorentz violations mean the end of local causality (which I'll bet against) or if they're random weird physics (which I won't bet against).

I am also willing to bet that the fundamental constant c as it appears in multiple physical equations is the constant of time/space locality, i.e., the constant we know as c is fundamentally the shrinking constant by which an infinitesimal neighborhood in space causally determines an infinitesimal future in time. I am willing to lose the bet if there's still locality but the real size of the infinitesimal spatial neighborhood goes as 2c rather than c (though I'm not actually sure whether that statement is even meaningful in a Lorentz-invariant universe) and therefore you can use neutrinos to transmit information at up to twice the speed of light, but no faster. The clues saying that c is the fundamental constant that we should expect to see in any continuous analogue of a locally causal universe, are strong enough that I'll bet on them at 99-to-1 odds.

What I can't make out is whether Lorentz violation throws away locality; employs a more complicated definition of c which is different in some directions than others; makes the effect of the constant different on neutrinos and photons; or, well, what exactly.

I would happily amend the bet to be annulled in the case that any more complicated definition of c is adopted by which there is still a constant of time/space locality in causal propagation, but it makes photons and neutrinos move at different speeds.

The trouble is that physicists don't read books like Causality and don't understand local causality as part of the apparent character of physical law, which is why some of them still believe in the "collapse of the wavefunction" - it would be an exceptional physicist whom we could simply ask whether the Standard Model Extension preserves locally continuous causality with c as the neighborhood-size constant.

Replies from: Eugine_Nier, JoshuaZ, Username, JoshuaZ
comment by Eugine_Nier · 2011-09-30T05:09:26.406Z · LW(p) · GW(p)

This is starting to remind me of Kant. Specifically, his attempt to provide an a priori justification for the then-known laws of physics. This made him look incredibly silly once relativity and quantum mechanics came along.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2011-09-30T07:58:44.819Z · LW(p) · GW(p)

And Einstein was better at the same sort of philosophy and used it to predict new physical laws that he thought should have the right sort of style (though I'm not trying to do that, just read off the style of the existing model). But anyway, I'd pay $20,000 to find out I'm that wrong - what I want to eliminate is the possibility of paying $20,000 to find out I'm right.

comment by JoshuaZ · 2011-09-30T03:09:06.546Z · LW(p) · GW(p)

You need to distinguish different notions of local causality. SR implies, in most formulations, a very strong form of local causality that you seem to be using here. But it is important to note that very well-behaved systems can fail to obey this - it isn't just weird systems. For example, a purely Newtonian universe won't obey this sort of strong local causality: a particle from far away can have arbitrarily high velocity and smack into the region we care about. The fact that such well-behaved systems are OK with weaker forms of local causality suggests that we shouldn't assign such importance to local causality.

What I can't make out is whether Lorentz violation throws away locality; employs a more complicated definition of c which is different in some directions than others; makes the effect of the constant different on neutrinos and photons; or, well, what exactly

This isn't a well-defined question. It depends very much on what sort of Lorentz violation you are talking about. Imagine that you are working in a Newtonian framework and someone asks "well, if gravity doesn't always decrease at a 1/r^2 rate, will the three-body problem still be hard?" The problem is that the set of systems which violate Lorentz invariance is so large that the question isn't that helpful.

The trouble is that physicists don't read books like Causality and don't understand local causality as part of the apparent character of physical law,

The vast majority of physicists aren't thinking about how to replace the fundamental laws with other, more unifying fundamental laws. The everyday work of physicists is stuff like trying to measure the rest mass of elementary particles more precisely, or being better able to predict the properties of pure water near a transition state, or trying to better model the behavior of high-temperature superconductors. They don't have reason to think about these issues. But even if they did, they probably wouldn't take these sorts of ideas as seriously as you do. Among other problems, strong local causality is something which appeals to a set of intuitions. And humans are notoriously bad at intuiting how the universe behaves. We evolved to get mates and avoid tigers, not to intuit the details of the causal structure of reality.

comment by Username · 2012-04-20T05:09:09.687Z · LW(p) · GW(p)

It clearly isn't universally understood, because if all physicists understood the universe in these terms, none of them would believe in a "collapse of the wavefunction", which is not locally related in the configuration space.

And just like that, Many-Worlds clicked for me. It's now incredibly obvious just how preposterous wavefunction collapse is, and this new intuitive mental model clears up a lot of the frustrating sticking points I was having with QM. c as the speed limit of information in the universe and the notion of local causality have both been a native part of my view of the universe for a while, but it wasn't until that sentence that I connected them to decoherence.

Edit: Wow, a lot more things just clicked, including quantum suicide. My priority on cryonics just shot up several orders of magnitude, and I'm going to sign up once I've graduated and started bringing in income.
Eliezer, if you have never seen The Prestige, I recommend you go and watch it. It provides a nice allegory for MW/quantum suicide that I think a lot of lay-people will be able to connect to easily. Could help when you're explaining things.

Edit2: Just read your cryonics 101, and while the RIGHT NOW message punctured through my akrasia, I looked it up and even the $310/yr is not affordable right now. However, it's far more affordable than I had thought, and in a couple of months I should be in a position where this becomes sustainably possible.

By the way, thank you. You probably know this on an intuitive level, but it should be good to hear that your work may very well be saving lives.

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2012-04-20T05:49:55.324Z · LW(p) · GW(p)

Username, you're having a small conversion experience here, going from "causality is local" to "wavefunction collapse is preposterous" to "I understand quantum suicide" to "I'd better sign up for cryonics once I graduate" in rapid succession. It's a shame we can't freeze you right now, and then do a trace-and-debug of your recent thoughts, as a case study.

This was a somewhat muddled comment from Eliezer. Local causality does not imply an upper speed limit on how fast causal influences can propagate. Then he equivocates between locality within a configuration and locality within configuration space. Then he says that if only everyone in physics thought like this, they wouldn't have wrong opinions about how QM works. I can only guess how you personally relate all that to decoherence. And from there, you get to increased confidence in cryonics. It could only happen on Less Wrong. :-)

ETA: Some more remarks:

Locality does not imply a maximum speed. Locality just means that causes don't jump across space to their effects, they have to cross it point by point. But that says nothing about how fast they cross it. You could have a nonrelativistic local quantum mechanics with no upper speed limit. Eliezer is conflating locality with relativistic locality, which is what he is trying to derive from the assumption of locality. (I concede that no speed limit implies a de-facto or practical nonlocality, in that the whole universe would then be potentially relevant for what happens here in the "next moment"; some influence moving at a googol light-years per second might come crashing in upon us.)

Equivocating between locality in a configuration and locality in a configuration space: A configuration is, let's say, an arrangement of particles in space. Locality in that context is defined by distance in space. But configuration space is a space in which the "points" themselves are whole configurations. "Locality" here refers to similarity between whole configurations. It means that the amplitude for a whole configuration is only immediately influenced by the amplitudes for infinitesimally different whole configurations.

Suppose we're talking about a configuration in which there are two atoms, A and B, separated by a light-year. The amplitude for that configuration (in an evolving wavefunction) will be affected by the amplitude for a configuration which differs slightly at atom A, and also by the amplitude for a configuration which differs slightly at atom B, a light-year away from A. This is where the indirect nonlocality of QM comes from - if you think of QM in terms of amplitude flows in configuration space: you are attaching single amplitudes to extended objects - arbitrarily large configurations - and amplitude changes can come from very different "directions" in configuration space.

Eliezer also talks about amplitudes for subconfigurations. He wants to be able to say that what happens at a point only depends on its immediate environment. But if you want to talk like this, you have to retreat from talking about specific configurations, and instead talk about regions of space, and the quantum state of a "region of space", which will associate an amplitude with every possible subconfiguration confined to that region.

This is an important consideration for MWI, evaluated from a relativistic perspective, because relativity implies that a "configuration" is not a fundamental element of reality. A configuration is based on a particular slicing of space-time into equal-time hypersurfaces, and in relativity, no such slicing is to be preferred as ontologically superior to all others. Ultimately that means that only space-time points, and the relations between them (spacelike, lightlike, timelike) are absolute; assembling sets of points into spacelike hypersurfaces is picking a particular reference frame.

This causes considerable problems if you want to reify quantum wavefunctions - treat them as reality, rather than as constructs akin to probability distributions - because (for any region of space bigger than a point) they are always based on a particular hypersurface, and therefore a particular notion of simultaneity; so to reify the wavefunction is to say that the reference frame in which it is defined is ontologically preferred. So then you could say, all right, we'll just talk about wavefunctions based at a point. But building up an extended wavefunction from just local information is not a simple matter. The extended wavefunction will contain entanglement but the local information says nothing about entanglement. So the entanglement has to come from how you "combine" the wavefunctions based at points. Potentially, for any n points that are spacelike with respect to each other, there will need to be "entanglement information" on how to assemble them as part of a wavefunction for configurations.

I don't know where that line of thought takes you. But in ordinary Copenhagen QM, applied to QFT, this just doesn't even come up, because you treat space-time, and particular events in space-time, as the reality, and wavefunctions, superpositions, sums over histories, etc, as just a method of obtaining probabilities about reality. Copenhagen is unsatisfactory as an ontological picture because it glosses over the question of why QM works and of what happens in between one "definite event" and the next. But the attempt to go to the opposite interpretive pole, and say "OK, the wavefunction IS reality" is not a simple answer to your philosophical problems either; instead, it's the beginning of a whole new set of problems, including, how do you reify wavefunctions without running foul of relativity?

Returning to Eliezer's argument, which purports to derive the existence of a causal speed-limit from a postulate of "locality": my critique is as informal and inexact as his argument, but perhaps I've at least shown that this is not as simple a matter as it may appear to the uninformed reader. There are formidable conceptual problems involved just in getting started with such an argument. Eliezer has the essentials needed to think about these topics rigorously, but he's passing over crucial details, and he may thereby be overlooking a hole in his intuitions. In mathematics, you may start out with a reasonable belief that certain objects always behave in a certain way, but then when you examine specifics, you discover a class of cases which work in a way you didn't anticipate.

What if you have a field theory with no speed limit, but in which significant and ultra-fast-moving influences are very rare; so that you have an effective "locality" (in Eliezer's sense), with a long tail of very rare disruptions? Would Eliezer consider that a disproof of his intuitive idea, or an exception which didn't sully the correctness of the individual insight? I have no idea. But I can say that the literature of physics is full of bogus derivations of special relativity, the Born rule, the three-dimensionality of space, etc. This derivation of "c" from Pearlian causal locality certainly has the ingredients necessary for such a bogus derivation. The way to make it non-bogus is to make it deductively valid, rather than just intuitive. This means that you have to identify and spell out all the assumptions required for the deduction.

Replies from: Username
comment by Username · 2012-04-20T06:58:14.060Z · LW(p) · GW(p)

This may or may not be the result of day 2 of modafinil. :) I don't think it is, because I already had most of the pieces in place; it just took that sentence to make everything fit together. But that is a data point.

Hm, a trace-debug. My thought process over the five minutes that this took place was manipulation of mental imagery of my models of the universe. I'm not going to be able to explain much clearer than that, unfortunately. It was all very intuitive and not at all rigorous; the closest representation I can think of is Feynman's thinking about balls. I'm going to have to do a lot more reading as my QM is very shaky, and I want to shore this up. It will also probably take a while until this way of thinking becomes the natural way I see the universe. But it all lines up, makes sense, and aligns with what people smarter than me are saying, so I'm assigning a high probability that it's the correct conclusion.

An upper speed limit doesn't matter - for locality to be valid, all that matters is that influences are not instantaneous.

A conversion experience is a very appropriate term for what I'm going through. I'm having very mixed emotions right now. A lot of my thoughts just clarified, which simply feels good. I'm grateful, because I live in an era where this is possible and because I was born intelligent enough to understand. Sad, because I know that most if not all of the people I know will never understand, and never sign up for cryonics. But I'm also ecstatic, because I've just discovered the cheat code to the universe, and it works.

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2012-04-20T08:57:55.748Z · LW(p) · GW(p)

I just made a long-winded addition to my comment, expanding on some of the gaps in Eliezer's reasoning.

I'm also ecstatic, because I've just discovered the cheat code to the universe, and it works.

Well, you're certainly not backing down and saying, hang on, is this just an illusory high? It almost seems inappropriate to dump cold water on you precisely when you're having your satori - though it's interesting from an experimental perspective. I've never had the opportunity to meddle with someone who thinks they are receiving enlightenment, right at the moment when it's happening; unless I count myself.

From my perspective, QM is far more likely to be derived from 't Hooft's holographic determinism, and the idea of personal identity as a fungible pattern is just (in historical terms) a fad resulting from the incomplete state of our science, so I certainly regard your excitement as based mostly on an illusion. It's good that you're having exciting ideas and new thoughts, and perhaps it's even appropriate to will yourself to believe them, because that's a way of testing them against the whole of the rest of your experience.

But I still find it interesting how it is that people come to think that they know something new, when they don't actually know it. How much does the thrill of finally knowing the truth provide an incentive to believe that the ideas currently before you are indeed the truth, rather than just an interesting possibility?

Replies from: Username
comment by Username · 2012-04-20T10:29:49.448Z · LW(p) · GW(p)

From experiences back when I was young and religious, I've learned to recognize moments of satori as not much more than a high (have probably had 2-3 prior). I enjoy the experience, but I've learned skepticism and try not to place too much weight on them. I was more describing the causes for my emotional states rather than proclaiming new beliefs. But to be completely honest, for several minutes I was convinced that I had found the tree of life, so I won't completely downplay what I wrote.

How much does the thrill of finally knowing the truth provide an incentive to believe that the ideas currently before you are indeed the truth, rather than just an interesting possibility?

I suspect it has evopsych roots relating to confidence, the measured benefits of a life with purpose, and good-enough knowledge.

Reading 't Hooft's paper I could understand what he was saying, but I'm realizing that the physics is out of my current depth. And I understand the argument you explained about the flaws in spatial (as opposed to configuration) locality. I'll update my statement that 'Many-Worlds is intuitively correct' to 'Copenhagen is intuitively wrong,' which I suppose is where my original logic should have taken me - I just didn't consider strong MWI alternatives. Determinism kills quantum suicide, so I'll have to move down the priority of cryonics (though the 'if MWI then quantum suicide then cryonics' logic still holds and I still think cryonics is a good idea. I do love me a good hedge bet). But like I said, I'm not at all qualified to start assigning likelihoods here between different QM origins. This requires more study.

I don't see the issue with consciousness as being represented by the pattern of our brains rather than the physicality of it. You are right that we may eventually find that we can never look at a brain with high enough resolution to emulate it. But based on cases of people entering a several-hour freeze before being revived, the consciousness mechanism is obviously robust and I say this points towards it being an engineering problem of getting everything correct enough. The viability of putting it on a computer once you have a high enough resolution scan is not an issue - worst case scenario you start from something like QM and work up. Again this assumes a level of the brain's robustness (rounding errors shouldn't crash the mind), but I would call that experimentally proven in today's humans.

comment by JoshuaZ · 2011-09-30T15:00:09.664Z · LW(p) · GW(p)

Note also that some of the recent papers do explicitly discuss causality issues. See e.g. this one.

comment by JoshuaZ · 2011-09-29T12:34:15.936Z · LW(p) · GW(p)

Hmm, would you be willing to bet on either the 10% claim or the 80% claim?

Everything you have said until the last paragraph seems reasonable to me, and then those extremely high probabilities jump out.

comment by Eugine_Nier · 2011-09-28T03:11:25.635Z · LW(p) · GW(p)

I'm betting on the local structure of causality.

Not necessarily, there could be a distinguished frame of reference.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2011-09-28T18:09:16.691Z · LW(p) · GW(p)

That might preserve before-and-after. It wouldn't preserve the locality of causality. Once you throw away c, you might need to take the entire frame of the universe into account when calculating the temporal successor at any given point, rather than just the immediate spatial neighborhood.

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2012-04-20T07:27:54.932Z · LW(p) · GW(p)

There could be some other special velocity than c. Like, imagine there's some special reference frame in which you can send superluminal signals at exactly 2.71828 c in any direction. In other reference frames, this special velocity depends on which direction you send the signal. Lorentz invariance is broken. But the only implication for local causality is that you need to make your bubble 2.71828 times bigger.
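
A sketch of that direction dependence, assuming the ordinary relativistic velocity-addition formula still applies when transforming the signal out of the preferred frame (speeds in units of c):

```python
E_SPEED = 2.71828  # the special signal speed in the preferred frame, in units of c

def seen_from(u: float, v: float) -> float:
    """Collinear velocity addition: speed u as seen by an observer moving at v."""
    return (u - v) / (1 - u * v)

v = 0.5  # observer at half lightspeed relative to the preferred frame
print(seen_from(+E_SPEED, v))  # ~ -6.18: one superluminal speed...
print(seen_from(-E_SPEED, v))  # ~ -1.36: ...and a different one the other way
# (The sign flips are the usual FTL-plus-boost oddity: in the boosted frame
# the forward signal's arrival precedes its emission.)
```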

comment by Jack · 2011-09-23T13:32:09.975Z · LW(p) · GW(p)

People in this thread with physics backgrounds should say so so that I can update in your direction.

comment by Hyena · 2011-09-23T14:31:06.823Z · LW(p) · GW(p)

When I looked at the paper, my impression was that this is a persistent result in the experiment, which would explain publication: the experiment's results will be public and someone, eventually, will notice this in the data. Better that CERN officially notice it in the data than Random High Energy Physicist. People relying on CERN's move to publish may want to update to account for this fact.

Replies from: Jack, Mercurial
comment by Jack · 2011-09-23T14:46:27.953Z · LW(p) · GW(p)

This is a really good point.

comment by Mercurial · 2011-09-23T16:30:55.072Z · LW(p) · GW(p)

Forgive me for being a bit slow, but I honestly don't understand what you mean. I don't know why their publishing the results needs explanation; they already said it was because they couldn't find an error and are hoping that someone else will find one if it's there. Is your point that the fact that CERN published this rather than someone else is to be taken as evidence of its veracity? Or do you mean something else?

Replies from: Jack, PhilGoetz, Hyena
comment by Jack · 2011-09-23T17:38:14.064Z · LW(p) · GW(p)

Let's say you're a physicist maximizing utility. It's pretty embarrassing to publish results with mistakes in them, and the more important the results, the more embarrassing it would be to announce results later shown to be the product of some kind of incompetence. So one can usually expect published results of serious import to have been checked over and over for errors.

But the calculus changes when we introduce the incentive of discovering something before anyone else. This is particularly the case when the discovery is likely to lead to a Nobel prize. In this case a physicist might be less diligent about checking the work in order to make sure she is the first out with the new results.

Now in this case CERN-OPERA is pretty much the only game in town. No one else can measure this many neutrinos with this kind of accuracy. So it would seem like they could take all the time they needed to check all the possible sources of error. But if Hyena is right that OPERA's data is/was shortly going to be public then they risk someone outside CERN-OPERA noticing the deviation from expected delay and publishing the results. By itself that is pretty embarrassing and it introduces some controversy regarding who deserves credit for the discovery.

Now, after watching the presentation, I get the sense that they really did check everything they could think of, and it sounds like they took about six months to prepare the analysis. It also sounds like all the relevant calibration issues are just too tricky and complex for anyone outside CERN-OPERA to be the first to publish without risking embarrassment. Nor do I know for sure what kind of access to the results was available to outside physicists. So I think the alleged effect was at most minimal. But to update on publication requires a good model of the incentives the physicists faced.

Replies from: JoshuaZ
comment by JoshuaZ · 2011-09-23T20:02:11.176Z · LW(p) · GW(p)

Now in this case CERN-OPERA is pretty much the only game in town. No one else can measure this many neutrons with this kind of accuracy

Neutrinos, not neutrons (very different particles; neutrons are much better understood and easier to work with).

There's work in the US at Fermilab which could reasonably measure things at this level of accuracy. I don't know much about the Japanese work, but stuff related to SK might be able to do similar things. Other than those issues your analysis seems accurate. None of these points detract from the general thrust of your argument.

Replies from: Jack
comment by Jack · 2011-09-23T20:06:23.397Z · LW(p) · GW(p)

Edited- Neutrinos, obviously. Brain fart.

I think Fermilab-MINOS can measure such things, but I believe I read they have to update and recalibrate a bunch of things first to get more accuracy. (Recall MINOS already saw this same effect, but not at a statistically significant level. Obviously, they now have an incentive to improve their accuracy.)

comment by PhilGoetz · 2011-09-23T16:56:53.486Z · LW(p) · GW(p)

I think Hyena means they had a reason to publish other than believing the result is correct.

Replies from: Hyena
comment by Hyena · 2011-09-23T18:14:20.540Z · LW(p) · GW(p)

Correct.

comment by Hyena · 2011-09-23T18:56:18.196Z · LW(p) · GW(p)

My point is that CERN's publication of the anomaly is implied by its existence and an assumption that CERN is minimally competent to run a high-level research project. Therefore, the publication itself gives us no information we did not already have. (The paper itself doesn't even really give us anything important by noting the anomaly, either, since our beliefs are about the implications of the anomaly, so its existence in itself can't be part of the calculation.)

Replies from: Mercurial
comment by Mercurial · 2011-09-23T21:53:00.298Z · LW(p) · GW(p)

Ah. Thank you for clarifying!

comment by Desrtopa · 2011-09-23T15:27:42.104Z · LW(p) · GW(p)

P = .95 that the reporting will be much sparser when the results are overturned.

comment by malthrin · 2011-09-23T14:11:54.263Z · LW(p) · GW(p)

Relevant: The Beauty of Settled Science

I'm waiting for another experiment before I get too worked up about this result.

comment by Jack · 2011-09-23T12:57:58.382Z · LW(p) · GW(p)

That MINOS saw something like this before is pretty interesting. The other thing to consider is SN 1987A - at the speed the CERN neutrinos were traveling, we should have detected the neutrinos from SN 1987A four years before it was visible.
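
A back-of-envelope check of that figure, assuming OPERA's ~2.5e-5 fractional speed excess and ~168,000 light-years to SN 1987A:

```python
distance_ly = 168_000  # approximate distance to SN 1987A, in light-years
epsilon = 2.5e-5       # OPERA's fractional speed excess (~60 ns over 730 km)
print(f"early arrival: ~{distance_ly * epsilon:.1f} years")  # ~4.2 years
```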

The fact that this was made public like this suggests they are very confident they haven't made any obvious errors.

This paper discusses the possibility of neutrino time travel.

There is a press conference at 10 AM EST.

I'll say 0.9 non-trivial experimental set-up error (no new physics but nothing silly either), 0.005 something incompetent or fraudulent. The remainder is new physics: "something I don't know about", "neutrinos sometimes travel backwards in time" and "special relativity is wrong" at 8000:800:1.
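
Spelling those numbers out as probabilities, assuming the 8000:800:1 split applies to the leftover 0.095 of mass:

```python
weights = {
    "new physics I don't know about": 8000,
    "neutrinos sometimes travel backwards in time": 800,
    "special relativity is wrong": 1,
}
remainder = 1 - 0.9 - 0.005  # 0.095 left over for new physics
total = sum(weights.values())
for name, w in weights.items():
    print(f"{name}: {remainder * w / total:.5f}")
# ~0.08635, ~0.00864, ~0.00001
```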

comment by wedrifid · 2011-09-23T13:26:55.454Z · LW(p) · GW(p)

Perhaps the end of the era of the light cone and beginning of the era of the neutrino cone?

Does that work? Once you beat light don't you just win the speed race? The in-principle upper bound on what can be influenced just disappears. The rest is just engineering. Trivial little details of how to manufacture a device that emits a finely controlled output of neutrinos purely by shooting other neutrinos at something.

Replies from: gwern, Baughn
comment by gwern · 2011-09-23T14:06:53.636Z · LW(p) · GW(p)

I think so; with anything noticeably faster than c, can't you just ping-pong between paired receiver/emitters, gaining a little time into the past with each ping-pong? (If you're only gaining a few nanoseconds with each cycle, it might be expensive in equipment or energy, but for the right information from days/weeks in the future - like natural disaster warnings - it'd be worth it, even ignoring hypercomputation issues.)
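
Roughly how expensive, assuming each full cycle gains an OPERA-sized ~60 ns:

```python
gain_per_cycle = 60e-9        # seconds gained per cycle (assumed)
warning_wanted = 24 * 3600.0  # one day of advance warning
print(f"cycles needed: {warning_wanted / gain_per_cycle:.1e}")  # ~1.4e+12
```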

Replies from: SilasBarta, MartinB
comment by SilasBarta · 2011-09-24T00:35:27.630Z · LW(p) · GW(p)

"Yeah, I only go a little into the past each time, but I make it up in volume!"

Replies from: Solvent, gwern
comment by Solvent · 2011-09-29T10:38:49.302Z · LW(p) · GW(p)

What's that a quote from? I'd just Google, but you changed a word or two, I think.

Replies from: SilasBarta, arundelo
comment by SilasBarta · 2011-09-29T16:23:12.886Z · LW(p) · GW(p)

I just made it up, trying to be silly. It's just an application of the standard "low margin, make it up on volume". It barely even makes sense as a joke, since the idea is actually sound (or at least not unsound on its face). If you can go any amount into the past, then you could, it seems, stack the process so that you go as far as you want into the past.

comment by arundelo · 2011-09-29T15:10:04.358Z · LW(p) · GW(p)

I doubt Silas was thinking of this, but it reminded me of SNL's "First Citiwide Change Bank" commercial.

comment by gwern · 2011-09-24T01:24:48.209Z · LW(p) · GW(p)

That's what she said.

comment by MartinB · 2011-10-06T13:55:12.888Z · LW(p) · GW(p)

Go Mr. Parker!

Replies from: gwern
comment by gwern · 2011-10-06T14:59:36.388Z · LW(p) · GW(p)

I'll be honest, reading that link, that show sounds terrible.

Replies from: MartinB
comment by MartinB · 2011-10-06T18:47:19.777Z · LW(p) · GW(p)

I like it. They used difficult and expensive time travel to undo major catastrophes.

comment by Baughn · 2011-09-24T15:17:40.663Z · LW(p) · GW(p)

Well, I'd say there's a significant chance you'd end up with a boom instead, for invoking the (quantum) chronology protection conjecture.

That wouldn't necessarily stop you in all cases, though. It just means you need quantum computer-level isolation, or a signal that doesn't include any actual closed timelike curves - that is, you could hypothetically send a signal from 2011 Earth to 2006 Alpha Centauri so long as the response takes five years to get back.

Replies from: JoshuaZ, wedrifid
comment by JoshuaZ · 2011-09-24T15:59:58.513Z · LW(p) · GW(p)

Hmm, I don't think most variants of chronology protection imply inherently destructive results. But your remark made me feel all of a sudden very worried that if this is real it could be connected to the Great Filter. I'm almost certainly assigning this more emotional weight than its very tiny probability justifies.

Replies from: wedrifid, Baughn, pedanterrific
comment by wedrifid · 2011-09-24T22:38:47.055Z · LW(p) · GW(p)

I'm almost certainly assigning this more emotional weight than its very tiny probability justifies.

I don't know about you, but the emotions I associate with the possibility are fascination, curiosity, and some feeling that we need a word for, along the lines of entertainment-satisfaction. It's just so far out into far mode that it doesn't associate with visceral fear. And given the low probability, this is one instance of emotion disconnecting from knowledge of a threat that doesn't seem like a problem! :)

comment by Baughn · 2011-09-24T16:26:52.988Z · LW(p) · GW(p)

Don't worry, I'm pretty sure it'd be a tiny boom. ;)

No free energy, after all.

Replies from: James_Miller
comment by James_Miller · 2011-09-24T19:45:27.974Z · LW(p) · GW(p)

No free energy, after all.

How does this relate to free energy?

Replies from: Oscar_Cunningham
comment by Oscar_Cunningham · 2011-09-24T21:53:45.758Z · LW(p) · GW(p)

If there was an explosion big enough to cause worldwide destruction, where would the energy come from?

comment by pedanterrific · 2011-09-24T16:19:59.752Z · LW(p) · GW(p)

What, as in "You fools, you've doomed us all!"?

comment by wedrifid · 2011-09-24T15:29:51.424Z · LW(p) · GW(p)

Hey, I'm not the one who broke physics. Take it up with CERN! ;)

comment by gwern · 2011-09-23T14:08:15.562Z · LW(p) · GW(p)

"Recent CERN reports of faster than light neutrinos will be found to be mistaken within 3 months", PB.com.

Replies from: DanielLC
comment by DanielLC · 2011-09-24T01:28:53.697Z · LW(p) · GW(p)

The problem is that most of those people are probably guessing as to when it will be found to be mistaken.

Replies from: gwern
comment by gwern · 2011-09-24T01:53:59.548Z · LW(p) · GW(p)

Any finding that it is mistaken will have a 'when' attached, I think...

comment by wedrifid · 2011-09-23T13:19:24.881Z · LW(p) · GW(p)

Particles break light-speed limit?

My grandfather is doomed, doomed I say!

Mwahahaha!

Replies from: loqi
comment by loqi · 2011-09-23T22:14:14.609Z · LW(p) · GW(p)

And what, if I may ask, are your plans for your grandmother?

Replies from: None
comment by [deleted] · 2011-09-24T01:11:16.100Z · LW(p) · GW(p)

It's gonna be Lazarus Long all over again -_-;

Replies from: pedanterrific
comment by pedanterrific · 2011-09-24T01:44:29.368Z · LW(p) · GW(p)

Aha! I knew wedrifid was my worst enemy!

comment by JoshuaZ · 2011-09-23T12:53:31.334Z · LW(p) · GW(p)

I strongly suspect that this is due to human error (say 95%). A few people in this thread are batting around much higher probabilities, but given that this isn't a bunch of crackpots but researchers at CERN, that seems like overconfidence. (1-10^-8 is really, really confident.) The strongest evidence that this is an error is that the effect isn't much faster than the speed of light, but only a tiny bit over.

I'm going to now proceed to list some of the 5%. I don't know enough to discuss their likelihood in detail.

1) Neutrinos oscillating into a tachyonic form. This seems extremely unlikely. I'm not completely sure, but I think this would violate CPT among other things.

2) Neutrinos oscillating into a sterile neutrino that is able to travel along another dimension. We can approximately bound the number of neutrino types by around 6 (this extends from the SN 1987A data and solar neutrino data).

Both 1 and 2 require extremely weird situations where neutrinos oscillate into a specific form with extremely low probability but oscillate away from it with high probability. (If the probability of going to this form were high, we would have seen it in the solar neutrino deficit.) Both also have the nice feature of potentially explaining dark matter.

3) Photons have mass, and we need to distinguish between the speed of light and c in SR. The actual value of c in SR is slightly higher than what photons generally travel at, so high energy very low mass particles can travel faster than the speed of light but not faster than c. This runs into a lot of problems, such as the fact that a lot of SR can be derived from Maxwell's equations and some reasonable assumptions about conservation, symmetry and reference frames. So the speed of light should be the actual value showing up in SR.

One other thing to note that hasn't gotten a lot of press: if neutrinos regularly do this, we should have seen the SN 1987A neutrinos years before the light arrived, rather than just a few hours before. This is evidence against. But it is only weak evidence, since the early neutrino detectors were weak enough that this sort of thing could conceivably have been missed. Moreover, the Mont Blanc detector did detect a burst of neutrinos a few hours before SN 1987A's main burst. This is generally considered to be a statistical fluke. But that early burst could potentially have been neutrinos traveling faster than the speed of light. Problem with this: why would none of the other detectors have also gotten that early burst? Second problem: if this were the case, the early SN 1987A neutrinos might still have been traveling faster than light, but much, much slower than the claim here. This claim amounts to neutrinos traveling on the order of 1/10,000 to 1/40,000 of c faster than they should. The Mont Blanc burst would require them traveling faster on the order of only (10^-9)c.
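As a rough sanity check on those two ratios, here's a back-of-the-envelope sketch; the ~60 ns early arrival over the 730 km baseline, the ~5 hour Mont Blanc lead, and the ~168,000 light-year distance to SN 1987A are all plugged in as round-number assumptions:

```python
# Back-of-the-envelope check of the two fractional speed excesses quoted above.
C = 299_792_458.0                 # speed of light, m/s

# OPERA: 730 km baseline, neutrinos reported ~60 ns early.
light_time = 730e3 / C            # ~2.4 ms at c
opera_excess = 60e-9 / light_time
print(f"OPERA fractional excess:      {opera_excess:.1e}")  # ~2.5e-5, roughly 1/40,000

# Mont Blanc burst: ~5 hours early over ~168,000 light years.
mont_blanc_excess = 5.0 / (168_000 * 24 * 365.25)
print(f"Mont Blanc fractional excess: {mont_blanc_excess:.1e}")  # ~3e-9
```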

Replies from: prase, Jack
comment by prase · 2011-09-23T13:33:30.148Z · LW(p) · GW(p)

The main problem with 3) is that if photons have mass, then we would observe differences in speed of light depending on energy at least as big as the difference measured now for neutrinos. This seems not to be the case and c is measured with very high accuracy. If photons traveled with some velocity lower than c, but constant independent of energy, that would violate special relativity.

Replies from: JoshuaZ
comment by JoshuaZ · 2011-09-23T13:43:11.032Z · LW(p) · GW(p)

Yes, but we almost always measure c precisely using light near the visible spectrum. Rough estimates were first made based on the behavior of Jupiter's moons (their eclipses occurred slightly too soon when Jupiter was near Earth and slightly too late when it was far away).

Variants of a Foucault apparatus are still used and that's almost completely with visible light or near visible light.

One can also use microwaves to do clever stuff with cavity resonance. I'm not sure if there would be a noticeable energy difference.

The ideal thing would be to measure the speed of light for higher energy forms of light, like x-rays and gamma rays. But I'm not aware of any experiments that do that.

Replies from: prase, wedrifid
comment by prase · 2011-09-23T17:22:32.464Z · LW(p) · GW(p)

The experimental upper bound on the photon mass is about 10^-18 eV. Photons near the visible spectrum have energies of order 1 eV, which means their relative deviation from c would be of order 10^-36 or smaller. Gamma rays would be even closer. I don't think the photon mass is measurable via the speed of light.
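For anyone who wants to vary the numbers, a minimal sketch of the ultra-relativistic velocity deficit 1 - v/c ≈ (mc²)²/(2E²), taking 2 eV as a representative visible-photon energy (an assumed round figure):

```python
# Velocity deficit of a massive particle at energy E >> m c^2:
#   1 - v/c ≈ (m c^2)^2 / (2 E^2)
def velocity_deficit(mass_eV: float, energy_eV: float) -> float:
    return 0.5 * (mass_eV / energy_eV) ** 2

# Photon at the experimental mass bound vs. a ~2 eV visible photon:
print(velocity_deficit(1e-18, 2.0))   # ~1.3e-37 -- hopelessly unmeasurable via c
```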

comment by wedrifid · 2011-09-23T13:46:20.372Z · LW(p) · GW(p)

The ideal thing would be to measure the speed of light for higher energy forms of light, like x-rays and gamma rays. But I'm not aware of any experiments that do that.

Err... build a broad spectrum telescope and look at an unstable stellar entity?

Replies from: JoshuaZ
comment by JoshuaZ · 2011-09-23T13:50:15.974Z · LW(p) · GW(p)

That's an interesting idea. But the method one uses to detect gamma rays or x-rays is very different from what one uses to detect visible light, so calibrating would be tough. And most unstable events take place over time, so this would be really tough. Look at, for example, a supernova: even the neutrino burst lasts on the order of tens of seconds. Telling whether the gamma rays arrived at just the right time or not would seem to be really tough. I'm not sure; I would need to crunch the numbers. It certainly is an interesting idea.

Hmm, what about actively racing them? Same method as yours but closer in. Set off a fusion bomb (which we understand really well) far away, say around 30 or 40 AU out. That's on the order of a few light-hours, which might be enough to see a difference if one knew that everything had started at the exact same time.

Replies from: wedrifid
comment by wedrifid · 2011-09-23T13:59:01.100Z · LW(p) · GW(p)

Telling whether the gamma rays arrived at just the right time or not would seem to be really tough. I'm not sure, would need to crunch the numbers.

Short answer: The numbers come out in the ballpark of hours not seconds.

Hmm, what about actively racing them? Same method as yours but closer in.

Being closer in relies on trusting your engineering competence to be able to calibrate your devices well. Do it based off interstellar events and you just need to go "Ok, this telescope went bleep at least a few minutes before that one" then start scribbling down math. I never trust my engineering over my physics.

comment by Jack · 2011-09-23T13:35:41.014Z · LW(p) · GW(p)

Photons having mass would screw up the Standard Model too... right?

Replies from: RolfAndreassen
comment by RolfAndreassen · 2011-09-23T18:58:29.739Z · LW(p) · GW(p)

Not necessarily. (Disclaimer: Physics background but this is not my area of expertise; I am working from memory of courses I took >5 years ago). In electroweak unification, there are four underlying gauge fields, superpositions of which make up the photon, W bosons, and Z boson. You have to adjust the coefficients of the combinations very carefully to make the photon massless and the weak bosons heavy. You could adjust them slightly less carefully and have an extremely light, but not massless, photon, without touching the underlying gauge fields; then you can derive Maxwell and whatnot using the gauge fields instead of the physical particles, and presumably save SR as well.

Observe that the current experimental upper limit on the photon mass (well, I say current - I mean, the first result that comes up in Google; it's from 2003, but not many people bother with experimental bounds on this sort of thing) is 7x10^{-19} eV, or what we call in teknikal fiziks jargon "ridiculously tiny".

Replies from: prase
comment by prase · 2011-09-26T11:19:14.863Z · LW(p) · GW(p)

SR doesn't depend on the behaviour of gauge fields. Special relativity is necessary to have a meaningful definition of "particle" in field theory. The gauge fields have to have a zero mass term because of gauge invariance, not Lorentz covariance. The mass is generated by interaction with the Higgs particle; this is essentially a trick which lets you forget about gauge invariance after the model is postulated. It doesn't impose any requirements on SR either.

Replies from: RolfAndreassen
comment by RolfAndreassen · 2011-09-26T18:21:52.967Z · LW(p) · GW(p)

I was thinking of how Lorentz invariance was historically arrived at: From Maxwell's equations. If the photon has mass, then presumably Maxwell does not exactly describe its behaviour (although with the current upper bound it will be a very good approximation); but the underlying massless gauge field may still follow Maxwell.

Replies from: prase
comment by prase · 2011-09-27T12:34:33.903Z · LW(p) · GW(p)

First, we should clarify what exactly is meant by "following Maxwell". For example, in electrodynamics (weak interaction switched off) there is interaction between the electron field and photons. Is this Maxwell? The classical Maxwell equations include the interaction of the electromagnetic field with current and charge densities, but they don't include equations of motion for the charges. Nevertheless, we can say that in quantum electrodynamics

  1. photon obeys Maxwell, because the electrodynamics Lagrangian is identical to the classical Lagrangian which produces Maxwell equations (plus equations of motion for the charges)
  2. photon doesn't obey Maxwell, because due to quantum corrections there is an extremely weak photon self-interaction, which is absent in classical Maxwell.

Note that the problem has nothing to do with masses (photons remain massless in QED), the Glashow-Weinberg-Salam construction of electroweak gauge theory, or the Higgs boson. The apparent Maxwell violation (here, scattering of colliding light beams) arises because on the quantum level one can't prevent the electron part of the Lagrangian from influencing the outcome, even if there are no electrons in the initial and final states. Whether or not this is viewed as a Maxwell violation is rather a choice of words. The electromagnetic field still obeys equations which are free Maxwell + interaction with non-photon fields, but there are effects which we don't see in the classical case. Also, those violations of Maxwell are perfectly compatible with Lorentz covariance.

In the case of vector boson mass generation, one may again formulate it in two different ways:

  1. the vector boson follows Maxwell, since it obeys equations which are free Maxwell + interaction with Higgs
  2. it doesn't follow Maxwell, because the interaction with Higgs manifests itself as effective mass

Again this is mere choice of words.

Now, you mentioned the linear combinations of non-physical gauge fields which give rise to the physical photon and weak interaction bosons. The way you put it, it seems that the underlying fields, which correspond to the U(1) and SU(2) gauge group generators, are massless and the mass arises somehow in the process of combining them together. This is not the case. The underlying fields all interact with the Higgs and therefore are all massive. Even if the current neutrino affair led to a slight revision of photon masslessness, the underlying fields would be "effectively massive" by interaction with the Higgs (I put "effectively massive" in quotes because it's pretty weird to speak about effective properties of fields which are not measurable).

Of course, your overall point is true - there is no fundamental reason why photon couldn't obtain a tiny mass by the Higgs mechanism. Photon masslessness isn't a theoretical prediction of the SM.

Replies from: RolfAndreassen
comment by RolfAndreassen · 2011-09-27T18:26:53.165Z · LW(p) · GW(p)

Ok, I sit corrected. This is what happens when an experimentalist tries to remember his theory courses. :)

comment by JoshuaZ · 2011-09-24T23:48:17.509Z · LW(p) · GW(p)

Ok. I think there's one thing that should be stated explicitly in this thread that may not have been getting enough attention (and which in my own comments I probably should have been more explicit about).

The options are not "CERN screwed up" and "neutrinos can move faster than c." I'm not sure about the actual probabilities but P(neutrinos can move faster than c|CERN didn't screw up) is probably a lot less than P(Weird new physics that doesn't require faster than light particles|CERN didn't screw up).

Replies from: Oscar_Cunningham
comment by Oscar_Cunningham · 2011-09-25T00:22:30.505Z · LW(p) · GW(p)

I did say "Error caused by new physical effect. P = 0.15" right in the first comment in this thread. It's just that we don't know enough about the design of the experiment to say much about it. Do you know how the neutrinos were generated/detected?

Replies from: JoshuaZ
comment by JoshuaZ · 2011-09-25T00:33:57.302Z · LW(p) · GW(p)

The neutrino generation is somewhat indirect. Protons are accelerated into graphite, and then the resulting particles are accelerated further in the correct direction so that they decay into muons and muon neutrinos. The muons are quickly lost (muons don't like to interact with much, but a few kilometers of solid rock will block most of them). The detector itself is set up to detect specifically the neutrinos which have oscillated into tau neutrinos.

The detector itself is a series of lead plates with interwoven layers of light-sensitive material, with scintillator counters to detect events in the light-sensitive stuff. I don't fully understand the details of the detector. (In particular, I don't know how they are differentiating tau neutrinos hitting the lead plates from muon neutrinos or electron neutrinos.) But I naively presume there's some set of characteristic reactions which occur for the tau neutrinos and not the other two. Since this discrepancy is for neutrinos in general, and they seem to be picking up data for all the neutrinos (I think?), that shouldn't be too much of an issue.

I've heard so far only a single hypothesis of new physics without faster-than-light travel, involving suppression of virtual particles, and I don't have anywhere near the expertise to guess whether that sort of thing is at all plausible.

Replies from: Dreaded_Anomaly
comment by Dreaded_Anomaly · 2011-09-25T06:35:27.452Z · LW(p) · GW(p)

The detector itself is a series of lead plates with interwoven layers of light-sensitive material which has then scintillator counters to detect events in the light sensitive stuff. I don't fully understand the details for the detector. (In particular I don't know how they are differentiating tau neutrinos hitting the lead plates from muon neutrinos or electron neutrinos) but I naively presume that there's some set of characteristic reactions which occur for the tau neutrinos and not the other two.

There is a conserved quantity* for elementary particles that is called "lepton number." It is defined such that leptons (electrons, muons, taus, and their respective neutrinos) have lepton number +1, and anti-leptons (positrons, antimuons, antitaus, and antineutrinos) have lepton number -1. Further, the presence of each flavor (electron, muon, tau) is conserved between the particles and the corresponding neutrinos.

For example, take the classic beta decay. A neutron decays to a proton, an electron, and an electron antineutrino. The neutron is not a lepton, so lepton number must be conserved at zero. The electron has lepton number +1 and the electron antineutrino has lepton number -1, totaling zero, and the "electron" flavor is conserved between the two of them.

Now, think about an inverse beta decay: an electron antineutrino combines with a proton to form a neutron and a positron. The electron antineutrino has lepton number -1, and so does the positron that is created; again, the "electron" flavor is conserved.

How does this apply to tau neutrinos? Reactions similar to an inverse beta decay occur when the other flavors of neutrinos interact with particles in the detector, but their flavors must be conserved, too. So, when a tau neutrino interacts, it produces a tau particle. A tau can be distinguished from an electron or muon in the detector by its mass and how it decays.

*This conservation is actually violated by neutrino oscillations, but it still holds in most other interactions.
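A toy bookkeeping sketch of the argument, in case it helps; the particle table and reaction lists here are illustrative only and have nothing to do with real detector software:

```python
# Each particle: (electron number, muon number, tau number); total lepton
# number is just the sum of the three.
L = {
    "n": (0, 0, 0), "p": (0, 0, 0),
    "e-": (1, 0, 0), "e+": (-1, 0, 0),
    "nu_e": (1, 0, 0), "anti_nu_e": (-1, 0, 0),
    "tau-": (0, 0, 1), "nu_tau": (0, 0, 1),
}

def conserved(initial, final):
    totals = lambda side: tuple(sum(L[p][i] for p in side) for i in range(3))
    return totals(initial) == totals(final)

print(conserved(["n"], ["p", "e-", "anti_nu_e"]))   # beta decay: True
print(conserved(["anti_nu_e", "p"], ["n", "e+"]))   # inverse beta decay: True
print(conserved(["nu_tau", "n"], ["p", "tau-"]))    # tau neutrino interaction: True
```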

Replies from: JoshuaZ
comment by JoshuaZ · 2011-09-25T14:31:16.092Z · LW(p) · GW(p)

Ok. That was basically what I thought was happening. Thanks for clarifying.

comment by prase · 2011-09-23T12:06:36.258Z · LW(p) · GW(p)

My probability distribution of explanations:

  • Neutrinos move faster than light in vacuum: P = 0.001
  • Error in distance measurement: P = 0.01
  • Error in time measurement: P = 0.4
  • Error in calculation: P = 0.1
  • Error in identification of incoming neutrinos: P = 0.1
  • Statistical fluke: P = 0.1
  • Outright fraud, data manipulation: P = 0.05
  • Other explanation: P = 0.239
Replies from: bogdanb, Jack
comment by bogdanb · 2011-09-23T12:41:51.062Z · LW(p) · GW(p)

Having read the preprint, about the only observation I have is that I think you’re overestimating the fraud hypothesis.

There’s almost a whole page of authors, the preprint describes only the measurement, and finishes with something like (paraphrasing) “we’re pretty sure of seeing the effect, but given the consequences of this being new physics we think more checking is needed, and since we’re stumped trying to find other sources of error, we publish this to give others a try too; we deliberately don’t discuss any possible theoretical implications.”

At the very least, this is the work of the aggregate group trying very hard to “do it right”; I guess there could still be one rogue data manipulator, but I would give much less than 1 in 20 that nobody else in the group noticed anything funny.

comment by Jack · 2011-09-23T13:15:11.612Z · LW(p) · GW(p)

Your statistical fluke estimate is too high; the experiment was repeated something like 16,000 times.

Replies from: prase, Mercurial
comment by prase · 2011-09-23T13:42:17.920Z · LW(p) · GW(p)

Did they 1) measure 16,000 neutrinos and find each one above c, or 2) run the experiment 16,000 times, each run consisting of many measurements, and find that each run produced the result, or 3) measure 16,000 neutrinos, analyse the data once, and find that on average the velocity is higher than c, with 6σ significance?

Replies from: Jack, FAWS
comment by Jack · 2011-09-23T15:10:22.386Z · LW(p) · GW(p)

Yeah, it's more complicated than all of those but (3) is the closest.

comment by FAWS · 2011-09-23T14:52:08.556Z · LW(p) · GW(p)

That doesn't exhaust all possibilities, though it seems to have been 3).

comment by Mercurial · 2011-09-23T16:40:00.718Z · LW(p) · GW(p)

Bear in mind that many parapsychological experiments have been repeated vastly more than that. My impression is that anyone who wants to argue that this is extremely unlikely to be a statistical fluke is going to have a much harder time viewing parapsychology as the control group for science.

Replies from: Jack
comment by Jack · 2011-09-23T17:18:09.626Z · LW(p) · GW(p)

The comparison to parapsychology is a really poor one in this case, for what should be pretty obvious reasons. For example, we know there is no file drawer effect. What we know about neutrino speed so far comes from a) supernova measurements, which contradict these results but measured much lower energy neutrinos, and b) direct measurements that didn't have the sample size or the timing accuracy to reveal the anomaly OPERA discovered.

But more importantly this was a six sigma deviation from theoretical prediction. As far as I know, that is unheard of in parapsychology.

We cannot treat physics the way we treat psychology.
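For concreteness, here's what a sigma level cashes out to as a two-sided tail probability under idealized Gaussian assumptions; a generic sketch, not anything specific to OPERA's actual analysis:

```python
from math import erfc, sqrt

def two_sided_p(sigma: float) -> float:
    """Two-sided Gaussian tail probability for a given sigma level."""
    return erfc(sigma / sqrt(2))

for s in (2, 3, 5, 6):
    print(f"{s} sigma -> p ~ {two_sided_p(s):.1e}")
# 6 sigma -> p ~ 2.0e-9
```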

Replies from: Mercurial
comment by Mercurial · 2011-09-23T22:09:52.556Z · LW(p) · GW(p)

The comparison to parapsychology is a really poor one in this case-- for what should be pretty obvious reasons.

Well, whatever this might say about me, the reasons aren't obvious to me.

For example, we know there is no file drawer effect.

Right, but as I understand it, you don't need a file drawer effect to see that some of the experiments done in parapsychology still have devastatingly tiny p-values on their own, such as through the Stanford Research Institute. So the file drawer effect isn't really the right way to challenge the analogy.

But more importantly this was a six sigma deviation from theoretical prediction. As far as I know, that is unheard of in parapsychology.

I actually don't know what that means. Is sigma being used to indicate standard deviation? If so, then yes, there have been a number of parapsychology experiments that went in that range of accuracy - some moreso if I recall correctly. (It has been many years since I read into that stuff, so I could be misremembering.)

We cannot treat physics the way we treat psychology.

My point is actually more about statistics than science, so any system that uses frequentist statistics to extract truth is going to suffer from this kind of comparison. As I understand it, the statistical methods that are used to verify measurements like this FTL neutrino phenomenon are the same kinds of techniques used to demonstrate that people can psychokinetically affect random-number generators. So either parapsychology is ridiculous because it uses bad statistical methods (in which case there's a significant chance that this FTL finding is a statistical error), or we can trust the statistical methods that CERN used (which seems to force us to trust the statistical methods that parapsychologists use.)

(Disclaimer: I'm not trying to argue anything about parapsychology here. I'm only attempting to point out that, best as I can tell, the argument for parapsychology as the control group for science seems to suggest that the CERN results stand a fair chance of being bad statistics in action. If A implies B and we're asserting probably-not-B, then we have to accept probably-not-A.)

Replies from: Jack
comment by Jack · 2011-09-24T00:15:39.545Z · LW(p) · GW(p)

Right, but as I understand it, you don't need a file drawer effect to see that some of the experiments done in parapsychology still have devastatingly tiny p-values on their own, such as through the Stanford Research Institute. So the file drawer effect isn't really the right way to challenge the analogy.

How is that?

I actually don't know what that means. Is sigma being used to indicate standard deviation? If so, then yes, there have been a number of parapsychology experiments that went in that range of accuracy - some moreso if I recall correctly. (It has been many years since I read into that stuff, so I could be misremembering.)

You need to provide links, because I read a fair bit on the subject and don't recall this. If I came across such results my money would be on fraud or systematic error, not a statistical fluke.

So either parapsychology is ridiculous because it uses bad statistical methods (in which case there's a significant chance that this FTL finding is a statistical error), or we can trust the statistical methods that CERN used (which seems to force us to trust the statistical methods that parapsychologists use.)

This is the kind of "outside view taken to the extreme" attitude that just doesn't make sense. We know why the statistical results of parapsychological studies tend not to be trustworthy: publication bias, the file drawer effect, exploratory research retroactively turned into hypothesis testing, etc. If we didn't know why such statistical results couldn't be trusted, then we would be compelled to seriously consider parapsychological claims. My claim is that those reasons don't apply to neutrino velocity measurements.

Replies from: Mercurial
comment by Mercurial · 2011-09-24T22:20:33.638Z · LW(p) · GW(p)

You need to provide links because I read a fair bit on the subject and don't recall this.

That's a fair request. I don't really have the time to go digging for those details, though. If you feel so inspired, again I'd point to the work done at the Stanford Research Institute (or at least I think it was that) where they did a ridiculous number of trials of all kinds and did get several standard deviations away from the expected mean predicted based on the null hypothesis. I honestly don't remember the numbers at all, so you could be right that there has never been anything like a six-s.d. deviation in parapsychological experiments. I seem to recall that they got somewhere around ten - but it has been something like six years since I read anything on this topic.

That said, I get the feeling there's a bit of goalpost-moving going on in this discussion. In Eliezer's original reference to parapsychology as the control group for science, his point was that there are some amazingly subjective effects that come into play with frequentist statistics that could account for even the good (by frequentist standards) positive-result studies from parapsychology. I agree, there's a lot of problem with things like publication bias and the like, and that does offer an explanation for a decent chunk of parapsychology's material. But to quote Eliezer:

Parapsychology, the control group for science, would seem to be a thriving field with "statistically significant" results aplenty. Oh, sure, the effect sizes are minor. Sure, the effect sizes get even smaller (though still "statistically significant") as they collect more data. Sure, if you find that people can telekinetically influence the future, a similar experimental protocol is likely to produce equally good results for telekinetically influencing the past. Of which I am less tempted to say, "How amazing! The power of the mind is not bound by time or causality!" and more inclined to say, "Bad statistics are time-symmetrical." But here's the thing: Parapsychologists are constantly protesting that they are playing by all the standard scientific rules, and yet their results are being ignored - that they are unfairly being held to higher standards than everyone else. I'm willing to believe that. It just means that the standard statistical methods of science are so weak and flawed as to permit a field of study to sustain itself in the complete absence of any subject matter.

I haven't looked at the CERN group's methods in enough detail to know if they're making the same kind of error. I'm just trying to point out that we can't assign an abysmally low probability to their making a common kind of statistical error that finds a small-but-low-p-value effect without simultaneously assigning a lower probability to parapsychologists making this same mistake than Eliezer seems to.

And to be clear, I am not saying "Either the CERN group made statistical errors or telepathy exists." Nor am I trying to defend parapsychology. I'm simply pointing out that we have to be even-handed in our dismissal of low-p-value thinking.

Replies from: wedrifid
comment by wedrifid · 2011-09-24T22:43:08.179Z · LW(p) · GW(p)

Sure, if you find that people can telekinetically influence the future, a similar experimental protocol is likely to produce equally good results for telekinetically influencing the past. Of which I am less tempted to say, "How amazing! The power of the mind is not bound by time or causality!" and more inclined to say, "Bad statistics are time-symmetrical."

That doesn't actually strike me as all that much extra improbability. A whole bunch of the mechanisms would allow both!

comment by MartinB · 2011-09-24T00:22:24.151Z · LW(p) · GW(p)

Can it be used to send messages?

Replies from: Baughn
comment by Baughn · 2011-09-24T15:05:15.101Z · LW(p) · GW(p)

Yes.

comment by JoshuaZ · 2011-10-02T22:10:11.013Z · LW(p) · GW(p)

Relevant updates:

John Costella has a fairly simple statistical analysis which strongly suggests that the OPERA data is statistically significant (pdf). This of course doesn't rule out systematic problems with the experiment, which still seem to be the most likely explanation.

Costella has also proposed possible explanations of the data. See 1 and 2. These proposals focus on the idea of a short-lived tachyon. This sort of explanation helps explain the SN 1987A data. Costella points out that if the muon-neutrino pair becomes tachyonic in the initial hadron barrier at the end of the accelerator, this would explain the data very well. The barrier has a length of 18.2 meters, which is very close to the claimed discrepancy. Costella proposes that they become tachyonic due to natural behavior of the Higgs field; I don't have anywhere near the expertise to evaluate how reasonable that is, although he himself points out potential empirical problems with the hypothesis. Note that this hypothesis seems to be one of the easiest of the new-physics hypotheses to test, since one just makes the barrier longer and sees if the neutrinos arrive sooner.

Overall, interesting and surprisingly plausible, but I'm still betting on some form of error.

comment by Jack · 2011-09-23T20:55:53.703Z · LW(p) · GW(p)

More relevant papers:

"Neutrinos Must Be Tachyons" (1997)

Abstract: The negative mass squared problem of the recent neutrino experiments from the five major institutions prompts us to speculate that, after all, neutrinos may be tachyons. There are number of reasons to believe that this could be the case. Stationary neutrinos have not been detected. There is no evidence of right handed neutrinos which are most likely to be observed if neutrinos can be stationary. They have the unusual property of the mass oscillation between flavors which has not been observed in the electron families. While Standard Model predicts the mass of neutrinos to be zero, the observed spectrum of Tritium decay experiments hasn't conclusively proved that the mass of neutrino is exactly zero. Based upon these observations and other related phenomena, we wish to argue that there are too many inconsistencies to fit neutrinos into the category of ordinary inside light cone particles and that the simplest possible way to resolve the mystery of the neutrino is to change our point of view and determine that neutrinos are actually tachyons.

This guy seems like someone a competent science journalist would be interviewing. I can't say I understand much of it, unfortunately.

Replies from: see, Oscar_Cunningham
comment by see · 2011-09-24T08:15:31.597Z · LW(p) · GW(p)

Tachyonic neutrinos can explain SN 1987A neutrinos beating photons to Earth, and tachyonic neutrinos can explain the CERN observations, but, critically, they cannot explain both phenomena simultaneously. The SN 1987A neutrinos apparently moved slower than the CERN neutrinos, when the pure tachyonic explanation would have them move faster than the CERN neutrinos.

This isn't to say neutrinos couldn't be tachyons, but it would still leave the CERN data requiring an explanation.

Replies from: JoshuaZ
comment by JoshuaZ · 2011-09-24T13:55:33.358Z · LW(p) · GW(p)

Your point is correct. But I'd also like to note, in case anyone thinks that SN 1987A is a problem for physics, that the conventional model explains the SN 1987A neutrinos beating the photons to Earth. Neutrinos are produced in the core of a star when it goes supernova. Light has to slowly work its way out from the core through all the matter, or is produced in the very outer layers of the star. Neutrinos don't interact with much matter, so they get through quickly and thus get a few hours' head start. Since they travel very close to the speed of light, they can arrive before the light.

This is the conventional explanation. If neutrinos routinely traveled faster than light, we'd expect the SN 1987A neutrinos to have arrived even earlier than the three hours by which they preceded the light. In particular, if they traveled as fast as the CERN result implies, they should have arrived about 3-5 years before the photons. Now, we didn't have good neutrino detectors much before 1987, so it is possible that there was a burst we missed in that time range. But if so, why was there a separate pack of much slower neutrinos that arrived when we expected?
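That 3-5 year figure is a one-line calculation, assuming the supernova neutrinos had the same fractional speed excess OPERA reports (~2.5e-5) and taking ~168,000 light years as the distance to SN 1987A:

```python
# Early arrival if SN 1987A neutrinos shared OPERA's fractional excess over c.
excess = 2.5e-5            # OPERA's reported (v - c)/c, roughly 1/40,000
distance_ly = 168_000      # approximate distance to SN 1987A, light years
print(f"~{distance_ly * excess:.1f} years early")   # ~4.2 years
```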

There may be possible explanations that fit both data sets. It is remotely possible, for example, that the tau and muon neutrinos are tachyons but the electron neutrino is not, or that all but the electron neutrino are tachyons. If one then monkeyed with the oscillation parameters, it might be possible to get that the CERN sort of beam would arrive fast but the beam from SN 1987A would arrive at the right time. I haven't worked the numbers out, but my understanding is that we have not-awful estimates for the oscillation behavior which should prevent this kludge from working. It might work if one had another type of neutrino, since that would give you six more parameters to play with. Other experiments can upper-bound the number of neutrino types with high probability, and the standard estimates say that there probably aren't more than 6 neutrino types. So there is room here.

I don't know enough about the underlying physics to evaluate how plausible this sort of thing is. Right now it seems that a lot of people are brainstorming different ideas.

Replies from: Oscar_Cunningham
comment by Oscar_Cunningham · 2011-09-24T17:45:04.029Z · LW(p) · GW(p)

In particular, if they traveled as fast as CERN predicted then they should have arrived about 3-5 year before the photons.

Is this just assuming that they travel at the same speed as recorded for the CERN ones, or has any adjustment been made for their differing energies?

Replies from: JoshuaZ
comment by JoshuaZ · 2011-09-24T18:17:57.861Z · LW(p) · GW(p)

This is from a naive, back-of-the-envelope calculation that doesn't take the differing energies into account. One thing to note is that by some estimates tachyons should slow down as they gain energy. If that's the case then the discrepancy may make sense, since the neutrinos from the supernova should be, I think, higher energy.

Replies from: Oscar_Cunningham
comment by Oscar_Cunningham · 2011-09-24T18:42:35.479Z · LW(p) · GW(p)

If that's the case then the discrepancy may make sense since the neutrinos from the supernova should be I think higher energy.

Nope. As I said here, the ones at CERN are 17 GeV, whereas the ones from the supernova were 6.7 MeV.

Replies from: JoshuaZ
comment by JoshuaZ · 2011-09-24T20:13:56.571Z · LW(p) · GW(p)

Ok. In that case this hypothesis seriously fails.

comment by Oscar_Cunningham · 2011-09-23T22:38:10.256Z · LW(p) · GW(p)

Something I hadn't realised: neutrinos have never been observed going slower than light. If they had been observed going slower than light, then finding them also going faster would be absurd, since it would require infinite energy. But if they are always tachyons, then their travelling faster than c is much less problematic.

However, I don't see how this explains the neutrinos from the supernova. The paper says that higher energies correspond to lower speeds (due to the imaginary mass). The ones at CERN are 17 GeV, whereas the ones from the supernova were 6.7 MeV. But the difference in time for the supernova was proportionately smaller than that for the CERN neutrinos.
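To put rough numbers on that, here's a sketch assuming the simplest tachyonic dispersion relation, v/c = sqrt(1 + (|m|c²/E)²); the |m| below is just whatever the OPERA excess would force, not a measured value:

```python
from math import sqrt

def beta(m_eV: float, E_eV: float) -> float:
    """v/c for a tachyon of 'imaginary mass' |m| at energy E."""
    return sqrt(1.0 + (m_eV / E_eV) ** 2)

# |m| implied by a speed excess of ~2.5e-5 at 17 GeV:
m = 17e9 * sqrt(2 * 2.5e-5)                # ~1.2e8 eV, i.e. ~120 MeV
print(f"|m| ~ {m / 1e6:.0f} MeV")

# The same tachyon at the 6.7 MeV supernova energy would be wildly superluminal:
print(f"v/c at 6.7 MeV ~ {beta(m, 6.7e6):.0f}")   # ~18, grossly contradicting SN 1987A
```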

Replies from: DanielLC
comment by DanielLC · 2011-09-24T01:35:53.508Z · LW(p) · GW(p)

Perhaps CERN's experiment was in error.

So, even if neutrinos really do go faster than light, CERN messed up.

comment by Oscar_Cunningham · 2011-09-23T11:12:21.126Z · LW(p) · GW(p)

The neutrinos are not going faster than light. P = 1-10^-8

Error caused by some novel physical effect: P = 0.15

Human error accounts for the effect (i.e. no new physics): P= 0.85

This isn't even worth talking about unless you know a serious amount about the precise details of the experiment.

EDIT: Serious updating on the papers Jack links to downthread. I hadn't realised that neutrinos have never been observed going slower than light. P = no clue whatsoever.

Replies from: Kevin, Thomas
comment by Kevin · 2011-09-23T11:19:25.009Z · LW(p) · GW(p)

This isn't even worth talking about unless you know a serious amount about the precise details of the experiment.

I'm stupid so I shouldn't talk about physics? That's absurd; Less Wrong is devoted to discussing exactly this kind of thing. Like... really? I'm really confused by your comment. Do you think the author of the Nature News piece should not have written it, for fear of causing people to think about a result?

This kind of comment is one of the most perniciously negative things you could post here. Please try not to stop discussion before it even starts.

Instead of shutting down discussion and saying it isn't worth talking about, maybe you should try and expand on "Error caused by some novel physical effect".

Replies from: Oscar_Cunningham
comment by Oscar_Cunningham · 2011-09-23T11:52:43.684Z · LW(p) · GW(p)

I'm stupid so I shouldn't talk about physics?

You're not stupid, but we're not (as far as I know) qualified to talk about this particular experiment. There's no hope in hell that the particles are going faster than light, so the only interesting discussion is what else could be causing the effect. This would involve an in depth knowledge of particle physics, as well as the details of the experiment, how the speed was calculated, the type of detector being used, etc. I don't work at CERN, and I don't think many LessWrongers do either.

Less Wrong is devoted to discussing exactly this kind of thing.

LessWrong is for discussing rationality, not physics. Assigning probabilities to the outcomes stretched my rationalist muscles (I wasn't sure about 10^-8. Too high? Too low?), but that's the only relevance this post has (and yes, I did downvote it).

Do you think the author of the Nature News piece should not have written for fear of causing people to think about a result?

It would be fine to report the anomalous result, and give an interesting exploration of what faster than light particles would imply, making it clear that it's horrendously unlikely. But presenting it as if the particles might actually be going faster than light is misleading.

Instead of shutting down discussion and saying it isn't worth talking about, maybe you should try and expand on "Error caused by some novel physical effect".

I've heard that the detector works by having the neutrinos hit a block where they produce some secondary particles, the results are then inferred from these particles. If these particles are doing something novel, or if the neutrinos are producing an unexpected kind of particle, then this could lead to the errors observed.

EDIT: I'm being too harsh. LessWrongers with less knowledge of the relevant physics would be perfectly justified in assigning a much higher probability to FTL than I do, and they've got no particular reason to update on my belief. Similarly, I expect my probability assignment would change if I learnt more physics.

Replies from: khafra, XiXiDu
comment by khafra · 2011-09-23T12:09:19.504Z · LW(p) · GW(p)

I wasn't sure about 10^-8. Too high? Too low?

I believe I am more skeptical than the average educated person about press releases claiming some fundamental facet of physics is wrong. But I would happily bet $1 against $10,000,000 that they have, indeed, observed neutrinos going faster than the currently understood speed of light.

Replies from: Kevin, Oscar_Cunningham
comment by Kevin · 2011-09-23T12:13:47.761Z · LW(p) · GW(p)

Taken! Paypal address?

Replies from: khafra
comment by khafra · 2011-09-23T13:53:17.333Z · LW(p) · GW(p)

I'd rather do it through an avenue other than Paypal, since I give odds near unity that if I won, Paypal would freeze my account before I could withdraw the $10 million. Also, considering that less than .01% of the world's population has access to $10 million USD in a reasonably liquid form, there's some counterparty risk.

But, IIRC, you're confident you have the resources to produce a subplanetary mass of paperclips within a few decades, so let's do it!

Replies from: Kevin
comment by Kevin · 2011-09-23T14:58:52.115Z · LW(p) · GW(p)

Oh, sorry, I was confused and thought you were offering the bet the other way around.

Replies from: khafra
comment by khafra · 2011-09-23T15:04:36.888Z · LW(p) · GW(p)

I apologize for being ambiguous; I should have been more clear that 10^-8 was way too low. Hopefully you weren't counting on those resources for manufacturing paperclips.

comment by Oscar_Cunningham · 2011-09-23T12:15:21.603Z · LW(p) · GW(p)

Sadly I'm not in possession of even 10^8 cents, so I can't make this bet.

Replies from: khafra
comment by khafra · 2011-09-23T13:35:35.427Z · LW(p) · GW(p)

If you have a bitcoin address, the smallest subdivision of a bitcoin against 1 bitcoin (historically, 1 bitcoin has been worth somewhere within $10 of $10) would do the trick.

comment by XiXiDu · 2011-09-23T13:36:39.958Z · LW(p) · GW(p)

…even Ereditato says it’s way too early to declare relativity wrong. “I would never say that,” he says. Rather, OPERA researchers are simply presenting a curious result that they cannot explain and asking the community to scrutinize it. “We are forced to say something,” he says. “We could not sweep it under the carpet because that would be dishonest.”

From here.

Replies from: Oscar_Cunningham
comment by Oscar_Cunningham · 2011-09-23T13:55:25.040Z · LW(p) · GW(p)

Which part of my post is this addressed to? I don't see any direct relevance.

comment by Thomas · 2011-09-23T11:17:55.626Z · LW(p) · GW(p)

Or the light is slightly subluminal and the neutrinos are (almost) luminal at their speed.

There may be a bunch of reasons more probable than the assumed one.

Replies from: prase
comment by prase · 2011-09-23T12:08:02.357Z · LW(p) · GW(p)

What do you mean by that light is subluminal? Literally it means that light travels slower than light, which is probably not the intended meaning.

Replies from: None, Thomas, Oscar_Cunningham
comment by [deleted] · 2011-09-23T12:47:26.037Z · LW(p) · GW(p)

I suspect he means that light maybe travels slightly slower than the constant c used in relativity. Maybe photons actually have a really tiny rest-mass. Maybe our measurements of the speed of light are all in non-perfect vacuum which makes it slow down a little bit.

Replies from: prase
comment by prase · 2011-09-23T13:54:13.355Z · LW(p) · GW(p)

If they had a tiny mass, we would observe variance in measured values of c, since less energetic photons would move more slowly. Measurements of c have a relative precision of at least 10^-7, and no dependence on energy has been observed in vacuum. Therefore the measured speed of light doesn't differ from the relativistic c by more than 10^-7. The relative difference reported for the neutrinos seems to be 10^-5.

comment by Thomas · 2011-09-23T13:15:24.837Z · LW(p) · GW(p)

Kloth answered as I would.

By the way, electrons in water can be faster than photons in water. No big surprise, maybe, if this happens with neutrinos and photons in a (near) vacuum.

comment by Oscar_Cunningham · 2011-09-23T12:18:06.783Z · LW(p) · GW(p)

Light can move more slowly while not in a vacuum; maybe this light was held up by something. That said, I don't understand the paper well enough to tell whether they directly raced the neutrinos against some actual light, or just compared against an earlier measurement.

Replies from: AlexMennen, JoshuaZ
comment by AlexMennen · 2011-09-23T15:35:37.682Z · LW(p) · GW(p)

I don't know whether this guy knows what he's talking about, but it sounds plausible:

Steven Sudit:

The speed of light in a typical vacuum falls short of the speed in a perfect vacuum. Light is slowed by interaction with particles, even the virtual particles found in a vacuum. This is why it's slightly faster when passing between plates exhibiting the Casimir effect, since that's based on suppression of virtual particle creation. (http://en.wikipedia.org/wiki/Faster-than-light#Faster_light_.28Casimir_vacuum_and_quantum_tunnelling.29) So one plausible explanation is that, because of their minimal interaction, neutrinos travel at the speed of a true vacuum, slightly edging out photons.

Replies from: shminux
comment by shminux · 2011-09-23T16:27:24.313Z · LW(p) · GW(p)

There have been no indications that one can transmit information FTL using the Casimir effect, the work he mentions was on quantum tunneling time, which is a different beast.

comment by JoshuaZ · 2011-09-23T12:28:20.062Z · LW(p) · GW(p)

That doesn't work. They didn't race the neutrinos against a light beam. They measured the distance to the detector using sensitive GPS.

Replies from: Thomas
comment by Thomas · 2011-09-23T13:16:06.605Z · LW(p) · GW(p)

Are they THAT sensitive? Possibly not.

Replies from: JoshuaZ
comment by JoshuaZ · 2011-09-23T13:23:49.691Z · LW(p) · GW(p)

In order for this to be from an error in measurement you need to be a few meters off (18 meters if that's the only problem). There are standard GPS techniques and surveying techniques which can be used to get very precise values. They state in the paper and elsewhere that they are confident to around 30 cm. Differential GPS can have accuracy down to about 10-15 cm, and careful averaging of standard GPS can get you in the range of 20 cm, so this isn't at all implausible but it is still a definite potential source of error.

A more plausible issue is that since parts of the detectors are underground they didn't actively use GPS for those parts. But even then, a multiple meter error seems unlikely, and 18 meters is a lot. It is possible that there's a combination of errors all going in the same direction, say a meter error in the distance, a small error in the clock calibration, etc. And all of that add up even as each error remains small enough that it is difficult to detect. But they've been looking at things really closely so one would then think that at least one of the errors would turn up.
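For reference, the 18-meter figure is just the reported timing anomaly converted to distance at c; a one-liner, taking the early arrival as roughly 60 ns:

```python
C = 299_792_458.0                    # speed of light, m/s
early_s = 60e-9                      # reported early arrival, ~60 ns
print(f"{C * early_s:.1f} m")        # ~18.0 m, hence "18 meters off"
```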

comment by JoshuaZ · 2011-09-26T16:12:48.955Z · LW(p) · GW(p)

There's now a theoretical paper up on the arXiv discussing a lot of these issues. The authors seem to be respected physics people. I have neither the time nor the expertise to evaluate it, but they seem to be claiming a resolution between the OPERA data and the SN 1987A data.

comment by Craig_Heldreth · 2011-09-25T15:26:37.457Z · LW(p) · GW(p)

The best short form critique of this announcement I have seen is the post by theoretical physicist Matthew Buckley on the metafilter website:

Matt's comment.

After I read that comment I clicked through to his personal website and I found a nifty layman's explanation of the necessity for Dark Matter in current cosmological theory:

Matt's web essay on dark matter.

If you don't have time to read his comment, what he says is that the results are not obviously bogus but they are so far-fetched that almost no physicists will find their daily work affected by the provisionally conceivable status of these results until a huge amount of double-, triple-, quadruple-, and quintuple-checking verifies them.

comment by beoShaffer · 2011-09-23T16:16:54.332Z · LW(p) · GW(p)

Obligatory xkcd reference http://xkcd.com/955/

Replies from: NihilCredo
comment by NihilCredo · 2011-09-23T20:16:50.784Z · LW(p) · GW(p)

p ( someone here cares aout this stuff but does not också check XCKD) = FAT BLOODY CHANGE I MEAN FAT BLOODY CHANCE

i should really fix the spelling above but its been a logn time since I was downvoted ISN"T THAT EXCITING

(it isn't)

(i still will post this)

(doing it now)

Replies from: Normal_Anomaly
comment by Normal_Anomaly · 2011-09-24T00:42:24.091Z · LW(p) · GW(p)

I ask out of sheer curiosity, and you by no means need to answer if you don't want to. But were you inebriated, sleep-deprived, or in another abnormal mental state when you posted this?

Replies from: NihilCredo, pedanterrific
comment by NihilCredo · 2011-09-24T05:08:24.389Z · LW(p) · GW(p)

I, in fact, was. My apologies for the interruption.

Replies from: MixedNuts
comment by MixedNuts · 2011-09-26T13:42:46.302Z · LW(p) · GW(p)

I didn't know you were Swedish! Your profile says you're in Uppsala. Wanna meet in Stockholm sometime?

Replies from: NihilCredo
comment by NihilCredo · 2011-09-27T02:24:07.108Z · LW(p) · GW(p)

How about we make it into a proper Stockholm meetup?

Replies from: MixedNuts
comment by MixedNuts · 2011-09-27T15:42:15.763Z · LW(p) · GW(p)

Yup.

Replies from: NihilCredo
comment by pedanterrific · 2011-09-24T02:51:29.333Z · LW(p) · GW(p)

I'm going to go out on a limb here and say yes.

comment by Dreaded_Anomaly · 2011-09-25T06:42:33.846Z · LW(p) · GW(p)

Sean Carroll has made a second blog post on the topic, to explain why faster-than-light neutrinos do not necessarily imply time travel.

The usual argument that faster than light implies the ability to travel on a closed loop assumes Lorentz invariance; but if we discover a true FTL particle, your first guess should be that Lorentz invariance is broken. (Not your only possible guess, but a reasonable one.) Consider, for example, the existence of a heretofore unobserved fluid pervading the universe with a well-defined rest frame, that neutrinos interact with but photons do not. Or a vector field with similar properties. There are various ways we could imagine some background that actually picks out a preferred frame of reference, violating Lorentz invariance spontaneously.

And, just to reiterate the main point:

The odds are still long against the OPERA result being right at face value. But even if it’s right, it doesn’t immediately imply that neutrinos are time-travelers.

comment by Dreaded_Anomaly · 2011-09-24T19:02:59.882Z · LW(p) · GW(p)

To quote one of my professors, from the AP release:

Drew Baden, chairman of the physics department at the University of Maryland, said it is far more likely that there are measurement errors or some kind of fluke. Tracking neutrinos is very difficult, he said.

"This is ridiculous what they're putting out," Baden said, calling it the equivalent of claiming that a flying carpet is invented only to find out later that there was an error in the experiment somewhere. "Until this is verified by another group, it's flying carpets. It's cool, but ..."

Also, Sean Carroll wrote a blog post which gives a good description of the physics and links to several other posts on the topic.

comment by PhilGoetz · 2011-09-23T17:00:14.157Z · LW(p) · GW(p)

Forgive my ignorance, but... if distance is defined in terms of the time it takes light to traverse it, what's the difference between "moving from A to B faster than the speed of light" and "moving from B to A"?

Replies from: DanielLC, orthonormal, Owen
comment by DanielLC · 2011-09-24T01:40:15.288Z · LW(p) · GW(p)

There's three things you can do:

  • Move from A to B.
  • Move between A and B faster than the speed of light. (It's uncertain which is the start and which is the end.)
  • Move from B to A.
comment by orthonormal · 2011-09-23T22:15:41.039Z · LW(p) · GW(p)

For the basic physics answer, look at Minkowski space: you can define when two events shouldn't be able to affect each other at all if nothing travels faster than light (i.e. they're separated by a spacelike interval).

More basically, we know the direction of causality from other factors; so if the neutrinos are emitted at A and interact with something at B, and both events increase entropy, then you either have to say that they traveled faster than light or that they violated the Second Law of Thermodynamics.

comment by Owen · 2011-09-23T17:14:19.408Z · LW(p) · GW(p)

You are correct: moving from A to B faster than the speed of light in one reference frame is equivalent to moving from B to A faster than the speed of light in another reference frame, according to special relativity.

Replies from: PhilGoetz
comment by PhilGoetz · 2011-09-23T22:07:49.866Z · LW(p) · GW(p)

Second 'faster' should be 'slower', I think.

Replies from: Owen, shinoteki
comment by Owen · 2011-09-23T23:00:22.710Z · LW(p) · GW(p)

Shinoteki is right - moving slower than light is timelike, while moving faster than light is spacelike. No relativistic change of reference frame will interchange those.

Replies from: PhilGoetz
comment by PhilGoetz · 2011-09-24T14:34:51.557Z · LW(p) · GW(p)

What do you mean by "spacelike"?

IIRC, movement in spacetime is the same no matter which axis you designate as being time.

Replies from: JoshuaZ
comment by JoshuaZ · 2011-09-24T15:11:19.130Z · LW(p) · GW(p)

IIRC, movement in spacetime is the same no matter which axis you designate as being time.

No. The metric treats time differently from space even though they are all on a single manifold. The Minkowski metric has three spatial dimensions with a +, and time gets a -. This is why space and time are different. Thinking of spacetime as R^4 is misleading because one doesn't have the Euclidean metric on it.
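A minimal sketch of the classification this metric gives you, with arbitrary example events and units where c = 1:

```python
def interval_type(dt: float, dx: float, dy: float, dz: float) -> str:
    """Classify the separation of two events under the Minkowski metric (-+++), c = 1."""
    s2 = -dt**2 + dx**2 + dy**2 + dz**2
    if s2 < 0:
        return "timelike"    # reachable below light speed; time order is frame-invariant
    if s2 > 0:
        return "spacelike"   # would require FTL; time order is frame-dependent
    return "lightlike"

print(interval_type(2, 1, 0, 0))   # timelike
print(interval_type(1, 2, 0, 0))   # spacelike
```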

comment by shinoteki · 2011-09-23T22:37:32.567Z · LW(p) · GW(p)

It shouldn't. Moving from B to A slower than light is possible*, moving from A to B faster than light isn't, and you can't change whether something is possible by changing reference frames.

*(Under special relativity without tachyons)

Replies from: PhilGoetz, PhilGoetz
comment by PhilGoetz · 2011-09-24T14:46:37.425Z · LW(p) · GW(p)

What I'm trying to get at is, What does a physicist mean when she says she saw X move from A to B faster than light? The measurement is made from a single point; say A. So the physicist is at A, sees X leave at time tX, sends a photon to B at time t0, and gets a photon back from B at time t1, which shows X at B at some time tB. I'm tempted to set tB = (t0+t1)/2, but I don't think relativity lets me do that, except within a particular reference frame.

"X travelled faster than light" only means that tX < t1. The FTL interpretation is t0 < tX < tB < t1: The photon left at t0, then X left at tX, and both met at B at time tB, X travelling faster than light.

Is there a mundane interpretation under which tB < tX < t1? The photon left A at t0, met X at B at tB, causing X to travel back to A and arrive there at tX.

The answer appears to be No, because X would need to travel faster than light on the return trip. And this also explains why Owen's original answer was correct: you can say that X travelled from A to B faster than light, or from B to A faster than light.
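For what it's worth, the tB = (t0 + t1)/2 assignment is just the Einstein synchronization convention, and it is indeed tied to A's rest frame; a trivial sketch:

```python
def radar_time(t0: float, t1: float) -> float:
    """Einstein-synchronized time A assigns to a distant event: halfway between
    emission of the probe photon and reception of its echo (valid in A's frame)."""
    return 0.5 * (t0 + t1)

print(radar_time(0.0, 2.0))   # photon out at t=0, echo at t=2 -> event assigned t=1
```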

Replies from: shinoteki
comment by shinoteki · 2011-09-24T15:18:44.962Z · LW(p) · GW(p)

An interpretation putting t1 < tX seems to have the photon moving faster than light backwards in time to get from B back to A.

comment by PhilGoetz · 2011-09-24T14:33:14.115Z · LW(p) · GW(p)

My question is whether he meant to say

  • moving from A to B faster than the speed of light in one reference frame is equivalent to moving from B to A faster than the speed of light in another reference frame

or

  • moving from A to B faster than the speed of light in one reference frame is equivalent to moving from B to A slower than the speed of light in another reference frame

both of which involve moving faster than light.

Replies from: Owen, shinoteki
comment by Owen · 2011-09-25T04:35:04.650Z · LW(p) · GW(p)

I meant the first one: faster than light in both directions.

You can think of it this way: if any reference frame perceived travel from B to A slower than light, then so would every reference frame. The only way for two observers to disagree about whether the object is at A or B first is for both to observe the motion as being faster than light.

comment by shinoteki · 2011-09-24T14:53:28.495Z · LW(p) · GW(p)

He's not talking about impossibility

I know Owen was not talking about impossibility, I brought up impossibility to show that what you thought Owen meant could not be true.

both of which involve moving faster than light.

Moving from B to A slower than the speed of light does not involve moving faster than light.