Ask an experimental physicist
post by RolfAndreassen · 2012-06-08T23:43:03.288Z · LW · GW · Legacy · 295 comments
In response to falenas108's "Ask an X" thread. I have a PhD in experimental particle physics; I'm currently working as a postdoc at the University of Cincinnati. Ask me anything, as the saying goes.
This is an experiment. There's nothing I like better than talking about what I do; but I usually find that even quite well-informed people don't know enough to ask questions sufficiently specific that I can answer any better than the next guy. What goes through most people's heads when they hear "particle physics" is, judging by experience, string theory. Well, I dunno nuffin' about string theory - at least not any more than the average layman who has read Brian Greene's book. (Admittedly, neither do string theorists.) I'm equally ignorant about quantum gravity, dark energy, quantum computing, and the Higgs boson - in other words, the big theory stuff that shows up in popular-science articles. For that sort of thing you want a theorist, and not just any theorist at that, but one who works specifically on that problem. On the other hand I'm reasonably well informed about production, decay, and mixing of the charm quark and charmed mesons, but who has heard of that? (Well, now you have.) I know a little about CP violation, a bit about detectors, something about reconstructing and simulating events, a fair amount about how we extract signal from background, and quite a lot about fitting distributions in multiple dimensions.
Comments sorted by top scores.
comment by Shmi (shminux) · 2012-06-09T03:27:56.242Z · LW(p) · GW(p)
In response to falenas108's "Ask an X" thread. I have a PhD in experimental particle physics; I'm currently working as a postdoc at the University of Cincinnati. Ask me anything, as the saying goes.
Since we are experimenting here... I have a PhD in theoretical physics (General Relativity), and I'd be happy to help out with any questions in my area.
↑ comment by [deleted] · 2012-06-09T04:24:54.998Z · LW(p) · GW(p)
This Reddit post says things like:
And then the point goes out. All at once, as if God turned off the switch. You have crossed the event horizon of the black hole.
and:
But Alice cannot see Bob either, because in order to do so, she has to turn her head toward her own past. The distortion of spacetime is so great that the spatial direction in which Bob lies relative to her is actually in her past. In technical terms, any light that comes to her from Bob will fall perpendicular to her eyeballs, regardless of which direction she turns her head.
When I read this, I believed that it was wrong (but well-written, making it more dangerous!). (However, he described Gravity Probe B's verification of the geodetic effect correctly.)
Wikipedia says:
An observer crossing a black hole event horizon can calculate the moment they've crossed it, but will not actually see or feel anything special happen at that moment. In terms of visual appearance, observers who fall into the hole perceive the black region constituting the horizon as lying at some apparent distance below them, and never experience crossing this visual horizon.[7] Other objects that had entered the horizon along the same radial path but at an earlier time would appear below the observer but still above the visual position of the horizon, and if they had fallen in recently enough the observer could exchange messages with them before either one was destroyed by the gravitational singularity.[8] Increasing tidal forces (and eventual impact with the hole's singularity) are the only locally noticeable effects.
And it cites http://jila.colorado.edu/~ajsh/insidebh/schw.html which says:
Engulfed in blackness? NO! It is a common misconception that if you fall inside the horizon of a black hole you will be engulfed in blackness. More specifically, the story is that as you fall towards the horizon, the image of the sky above concentrates into a smaller and smaller circular patch, which disappears altogether as you pass through the horizon. The misconception arises because if you lower yourself very slowly towards the horizon, firing your rockets like crazy just to stay put, then indeed your view of the outside universe will be concentrated into a small, bright circle above you. Click on the button to see what it looks like if you lower yourself slowly to the horizon. Physically, this happens because you are swimming like crazy through the inrushing flow of space (see Waterfall), and relativistic beaming concentrates and brightens the scene ahead of (above) you. See 4D Perspective for a tutorial on relativistic beaming. But this is a thoroughly unrealistic situation. You'd be daft to waste your rockets hovering just above the horizon of a black hole. If you had all that rocket power, why not do something useful with it, like take a trip across the Universe? If you nevertheless insist on hovering just above the horizon, and if by mistake you drop just slightly inside the horizon, then you can no longer stay at rest, however hard you fire your rockets: the faster-than-light flow of space into the black hole will pull you in. Whatever you choose to do, the view of the outside Universe will not disappear as you pass through the horizon.
This explanation agrees with everything I know (when hovering outside the event horizon, you are accelerating instead of being in free fall).
Can you confirm that the Reddit post was incorrect, and Wikipedia and its cited link are correct?
↑ comment by Shmi (shminux) · 2012-06-09T05:11:06.276Z · LW(p) · GW(p)
The last two quotes are indeed correct, and the reddit one is a mix of true and false statements.
To begin with, the conclusion subtly replaces the original premise of arbitrarily high velocity with arbitrarily high acceleration. (Confusing velocity and acceleration is a Grade 10 science error.) Given that one cannot accelerate to or past the speed of light, a near-infinite-acceleration engine is indeed of no use inside a black hole. However, arbitrarily high velocity is a different matter. It lets you escape from inside a black hole horizon. Of course, going faster than light brings a host of other problems (and no, time travel is not one of them).
As you continue to fall, the event horizon opens up beneath you, so you feel as if you're descending into a featureless black bowl. Meanwhile, the stars become more and more crowded into a circular region of sky centered on the point immediately aft.
This is true if you hover above the horizon, but false if you fall freely. In the latter case you will see some distortion, but nothing as dramatic.
And then the point goes out. All at once, as if God turned off the switch.
This is false if you travel slower than light. You still see basically the same picture as outside, at least for a while longer.
If you have a magical FTL spaceship, what you see is not at all easy to describe. For example, in your own frame of reference, you don't have mass or energy, only velocity/momentum, the exact opposite of what we describe as being stationary. Moreover, any photon that hits you is perceived as having negative energy. Yet it does not give or take any of your own energy (you don't have any in your own frame), it "simply" changes your velocity.
I cannot comment on the Alice and Bob quote, as I did not find it in the link.
Actually, I can talk about black holes forever, feel free to ask.
↑ comment by [deleted] · 2012-06-09T07:57:30.347Z · LW(p) · GW(p)
The last two quotes are indeed correct, and the reddit one is a mix of true and false statements.
Awesome, thanks.
I cannot comment on the Alice and Bob quote, as I did not find it in the link.
I swear it was there, but now I can't find it either.
I'd be interested to hear your opinion of Gravity Probe B.
↑ comment by komponisto · 2012-06-09T23:34:06.527Z · LW(p) · GW(p)
I have a PhD in theoretical physics (General Relativity), and I'd be happy to help out with any questions in my area.
Excellent! That happens to be a subject I'm very interested in.
Here are two questions, to start:
1. Do you have a position in the philosophical debate about whether "general covariance" has a "physical" meaning, or is merely a property of the mathematical structure of the theory?
2. How can the following (from "Mach's Principle: Anti-Epiphenomenal Physics") be true:
[I]f the whole universe was rotating around you while you stood still, you would feel a centrifugal force from the incoming gravitational waves, corresponding exactly to the centripetal force of spinning your arms while the universe stood still around you.
given that it implies that the electromagnetic force (which is what causes your voluntary movements, such as "spinning your arms around") can be transformed into gravity by a change of coordinates? (Wouldn't that make GR itself the "unified field theory" that Einstein legendarily spent the last few decades of his life searching for, supposedly in vain?)
↑ comment by Shmi (shminux) · 2012-06-10T03:59:20.871Z · LW(p) · GW(p)
- Do you have a position in the philosophical debate about whether "general covariance" has a "physical" meaning, or is merely a property of the mathematical structure of the theory?
Yeah, I recall looking into this early in my grad studies. I eventually realized that the only content of it is diffeomorphism invariance, i.e. that one should be able to uniquely map tensor fields to spacetime points. The coordinate representation of these fields depends on the choice of coordinates, but the fields themselves do not. In that sense the principle simply states that the relation spacetime manifold -> tensor field is a function (a single-valued map). For example, there is a unique metric tensor at each spacetime point (which, incidentally, precludes traveling into one's past).
I would also like to mention that the debate about whether "general covariance" has a "physical" meaning, or is merely a property of the mathematical structure of the theory, makes no sense to me as an instrumentalist (I consider the map-territory moniker an often-convenient model, not some deep ontological thing).
[I]f the whole universe was rotating around you while you stood still, you would feel a centrifugal force from the incoming gravitational waves, corresponding exactly to the centripetal force of spinning your arms while the universe stood still around you.
This is false, as far as I can tell. The frame dragging effect is not at all related to gravitational radiation. The Gödel universe is an example of extreme frame dragging, due to being filled with a spinning pressureless perfect fluid, and there are no gravitational waves in it.
it implies that the electromagnetic force (which is what causes your voluntary movements, such as "spinning your arms around") can be transformed into gravity by a change of coordinates?
Well, yeah, this is an absurd conclusion. The only thing GR says is that matter creates spacetime curvature. A spinning spacetime has to correspond to spinning matter. And spinning is not relative but quite absolute: it cannot be removed by a choice of coordinates (for example, the vorticity tensor does not vanish no matter what coordinates you pick). So Mach is out of luck here.
↑ comment by Cthulhoo · 2012-06-11T15:30:23.823Z · LW(p) · GW(p)
May I ask you which is exactly your (preferred) subfield of work? What are the most important open problems in that field that you think could receive decisive insight (both theoretically and experimentally) in the next 10 years?
↑ comment by Shmi (shminux) · 2012-06-11T19:14:55.745Z · LW(p) · GW(p)
May I ask you which is exactly your (preferred) subfield of work?
My research was in a sense Abbott-like: how a multi-dimensional world would look to someone living in the lower dimensions. It is different from the standard string-theoretical approach of bulk-vs-brane, because it is non-perturbative. I can certainly go into the details of it, but probably not in this comment.
What are the most important open problems in that field that you think could receive decisive insight (both theoretically and experimentally) in the next 10 years?
Caveat: I'm not in academia at this point, so take this with a grain of salt.
Dark energy (not to be confused with Dark matter) is a major outstanding theoretical problem in GR. As it happens, it is also an ultimate existential risk, because it limits the amount of matter available to humanity to "only" a few galaxies, due to the accelerating expansion of the universe. The current puzzle is not that dark energy exists, but why there is so little of it. A model that explains dark energy and makes new predictions might even earn the first ever Nobel prize in theoretical GR, if such predictions are validated.
That the expansion of the universe is accelerating is a relatively new discovery (1998), so there is a non-negligible chance that there will be new insights into the issue on a time frame of decades, rather than, say, centuries.
In observations/experiments, it is likely that gravitational waves will finally be detected. There is also a chance that Hawking radiation will be detected in a laboratory setting from dumb holes or other black-hole analogs.
↑ comment by Cthulhoo · 2012-06-12T08:12:26.230Z · LW(p) · GW(p)
My research was in a sense Abbott-like: how a multi-dimensional world would look to someone living in the lower dimensions. It is different from the standard string-theoretical approach of bulk-vs-brane, because it is non-perturbative. I can certainly go into the details of it, but probably not in this comment.
This looks really interesting, any material you can suggest on the subject? I was a particle physics phenomenologist until last year, so a proper introductory academic paper should be OK.
There is also a chance that Hawking radiation will be detected in a laboratory setting from dumb holes or other black-hole analogs.
And this looks very fascinating, too. Thanks a lot for your answers.
↑ comment by Shmi (shminux) · 2012-06-12T14:54:58.464Z · LW(p) · GW(p)
One of the original papers, mostly the Killing reduction part. You can probably work your way through the citations to something you find interesting.
↑ comment by [deleted] · 2012-06-09T22:51:30.948Z · LW(p) · GW(p)
I've never understood how going faster can make time go slower, thereby explaining why light always appears to have the same velocity.
If I'm moving in the opposite direction to light, and if there was no time slowing down, then the light would appear to go faster than normal from my perspective. Add in the effects of time slowing down, and light appears to be going at the same speed it always does. No problem yet. But if I'm moving in the same direction as the light, and time doesn't slow down, then it would appear to be going slower than normally, so the slowing down of time should make it look even slower, not give it the speed we always observe it in.
What am I missing?
↑ comment by Risto_Saarelma · 2012-06-10T00:22:27.862Z · LW(p) · GW(p)
This Reddit comment giving a lay explanation for the constant lightspeed thing was linked around a lot a while ago. The very short version is to think of everything being only ever able to move at the exact single speed c in a four-dimensional space, so whenever something wants to have velocity along a space axis, it needs to trade off some from along the time axis to keep the total velocity vector magnitude unchanged.
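A rough Python sketch of that bookkeeping (illustrative code, not from the comment; note that the honest version of this picture is the four-velocity, whose invariant magnitude uses a minus sign between the time and space parts, not the Euclidean sum the lay picture suggests):

```python
import math

c = 299_792_458.0  # speed of light, m/s

def four_velocity_norm(v):
    """Minkowski magnitude of the four-velocity for coordinate speed v."""
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    u_t = gamma * c  # "speed through time" component
    u_x = gamma * v  # "speed through space" component
    return math.sqrt(u_t**2 - u_x**2)  # invariant: always exactly c

for v in [0.0, 0.5 * c, 0.99 * c]:
    print(f"v = {v/c:.2f}c  ->  |u| = {four_velocity_norm(v)/c:.6f}c")
# Every line prints |u| = 1.000000c: more speed through space is exactly
# compensated by less proper time elapsing per unit of coordinate time.
```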
↑ comment by wedrifid · 2012-06-10T00:48:10.810Z · LW(p) · GW(p)
The very short version is to think of everything being only ever able to move at the exact single speed c in a four-dimensional space, so whenever something wants to have velocity along a space axis, it needs to trade off some from along the time axis to keep the total velocity vector magnitude unchanged.
I like this way of thinking of it, so much simpler than the usual explanations.
↑ comment by [deleted] · 2012-06-10T01:18:20.772Z · LW(p) · GW(p)
That is a very good explanation for the workings of time, thank you very much for that.
But it doesn't answer my real question. I'll try to be a bit more clear.
Light is always observed at the same speed. I don't think I'm so crazy that I imagined reading this all over the place on the internet. The explanation given for this is that the faster I go, the more I slow down through time, so from my reference frame, light decelerates (or accelerates? I'm not sure, but it actually doesn't matter for my question, so if I'm wrong, just switch them around mentally as you read).
So let's say I'm going in a direction, let's call it "forward". If a ball is going "backward", then from my frame of reference, the ball would appear to go faster than it really is going, because its relative speed = its speed - my speed. This is also true for light, though the deceleration of time apparently counters that effect by making me observe it slower by the precise amount to make it still go at the same speed.
Now take this example again, but instead send the ball forward like me. From my frame of reference, the ball is going slower than it is in reality, again because its relative speed = its speed - my speed. The same would apply to light, but because time has slowed for me, so has the light from my perspective. But wait a second. Something isn't right here. If light has slowed down from my point of view because of the equation "relative speed = its speed - my speed", and time slowing down has also slowed it, then it should appear to be going slower than the speed of light. But it is in fact going precisely at the speed of light! This is a contradiction between the theory as I understand it and reality.
My god, that is probably extremely unclear. The number of times I use the words speed and time and synonyms... I wish I could use visual aids.
Also, I just thought of this, but how does light move through time if it's going at the speed of light? That would give it a velocity of zero in the futureward direction (given the explanation you have linked to), which would be very peculiar.
Anyway, thanks for your time.
↑ comment by pragmatist · 2012-06-11T05:17:43.967Z · LW(p) · GW(p)
The explanation given for this is that the faster I go, the more I slow down through time, so from my reference frame, light decelerates (or accelerates? I'm not sure, but it actually doesn't matter for my question, so if I'm wrong, just switch them around mentally as you read).
Perhaps I'm reading this wrong, but it seems you're assuming that time slowing down is an absolute, not a relative, effect. Do you think there is an absolute fact of the matter about how fast you're moving? If you do, then this is a big mistake. You only have a velocity relative to some reference frame.
If you don't think of velocity as absolute, what do you mean by statements like this one:
The same would apply to light, but because time has slowed for me, so has the light from my perspective.
There is no absolute fact of the matter about whether time has slowed for you. This is only true from certain perspectives. Crucially, it is not true from your own perspective. From your perspective, time always moves faster for you than it does for someone moving relative to you.
I really encourage you to read the first few chapters of this: http://www.pitt.edu/~jdnorton/teaching/HPS_0410/chapters/index.html
It is simply written and should clear up some of your confusions.
↑ comment by Shmi (shminux) · 2012-06-10T04:11:49.133Z · LW(p) · GW(p)
Maybe this angle will help: "relative speed = its speed - my speed" is an approximate equation. The true one is relative speed = (its speed - my speed)/(1-its speed * my speed / c^2). Let one of the two speeds = c, and the relative speed is also c.
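The same formula in code, in units where c = 1 (an illustrative sketch; the function name is mine):

```python
def relative_velocity(u, v):
    """Relativistic velocity of an object moving at u, as seen from a
    frame moving at v (both along one axis, in units where c = 1)."""
    return (u - v) / (1.0 - u * v)

print(relative_velocity(0.25, 0.5))   # ball ahead of you: -2/7 ~ -0.2857
print(relative_velocity(0.25, -0.5))  # ball coming toward you: 2/3 ~ 0.6667
print(relative_velocity(1.0, 0.9))    # light: exactly 1.0
print(relative_velocity(1.0, -0.9))   # still exactly 1.0, for any v
```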
↑ comment by [deleted] · 2012-06-10T11:59:13.467Z · LW(p) · GW(p)
Thanks for your answer, this equation will make it easier to explain my problem.
Let's say a ball is going at the speed of c/4, and I'm going at a speed of c/2. According to the approximate equation, before the effects of time slowing down are taken into account, I would be going at a speed of -c/4. Now if you take into account time slowing down (divide -c/4 by the (1-its speed*...)), you get a speed of -2c/7.
So that was the example when I'm going in the same direction as the ball. Now let's say the ball is still going at a speed of c/4, but I'm now going at a speed of -c/2. Using the approximate equation: 3c/4. Add in time slowing down: 2c/3.
So the two pairs are (-c/4, -2c/7) and (3c/4,2c/3). Let's compare these values.
For the first tuple, when I'm going in the same direction as the ball, -c/4 > -2c/7. This means that -2c/7 is a faster speed in the negative direction (multiply both sides by -1 and you get c/4<2c/7), so from the c/2 reference frame, after the time slow effect, the observed speed of the ball is greater than it would be without the time slow down. So far so good.
For the second tuple, however, when I'm going in the opposite direction of the ball, 3c/4 > 2c/3. So from the -c/2 reference frame, after the time slow effect, the ball appears to be going slower than it would if time didn't slow down.
But didn't the first tuple show that the ball is supposed to appear to go faster given the time slow effect? Does this mean that time slows down when I'm going in the same direction as the ball, and it accelerates when I'm going in the opposite direction of the ball? Or does it mean that the modification of the approximate equation which gives the correct one is not in fact the effects of time slowing down? Or am I off my rocker here?
↑ comment by Shmi (shminux) · 2012-06-10T20:58:57.843Z · LW(p) · GW(p)
This might be just a confusion between speed and velocity. In one case relative velocity (not speed), in fractions of the speed of light, is -1/4 (classically) vs -2/7 (relativity). In the other case it is 3/4 vs 2/3. In both cases the classical value is higher than the relativistic value.
↑ comment by [deleted] · 2012-06-11T01:11:46.677Z · LW(p) · GW(p)
That the classical value is always higher than the time-slowed value is precisely what doesn't make sense to me.
If -1/4 is the classical value, and -2/7 is the relativity value, -2/7 is a faster speed than -1/4, even though -1/4 is a bigger number. So the relativity speed is faster. However, if 3/4 is the classical value, and 2/3 is the relativity value, 3/4 is a faster speed relative to me than 2/3. So in this case, the classical speed is faster.
So when I have a speed of 1/2, time slowing down makes the relative speed of the ball greater. And when I have a speed of -1/2, time slowing down makes the relative speed of the ball smaller. More generally, this can be described by my direction relative to the ball. If I'm moving in the same direction as the ball, time slowing down makes it appear to go faster than the classical speed. However, if I'm going in the opposite direction of the ball, then it appears to go slower than the classical speed. And that doesn't make sense. Time slowing down should always make the ball appear to go faster than the classical speed, and the effects of time slowing down should definitely should not depend on my direction relative to the ball.
↑ comment by Risto_Saarelma · 2012-06-10T02:19:06.133Z · LW(p) · GW(p)
If light has slowed down from my point of view because of the equation "relative speed = its speed - my speed", and time slowing down has also slowed it, then it should appear to be going slower than the speed of light.
When your subjective time slows down, things around you seem to move faster relative to you, not slower. So your time slowing down would make the light seem to speed up for you.
↑ comment by wedrifid · 2012-06-10T01:24:58.217Z · LW(p) · GW(p)
Also, I just thought of this, but how does light move through time if it's going at the speed of light? That would give it a velocity of zero in the futureward direction (given the explanation you have linked to), which would be very peculiar.
That's right. From the point of view of the photon it is created and destroyed in the same instant.
↑ comment by tgb · 2012-06-10T01:47:31.188Z · LW(p) · GW(p)
To add to that, it is a relatively common classroom experiment to show trails in gas left by muons from cosmic radiation. These muons are travelling at about 99.94% of the speed of light, which is quite fast, but the distance from the upper atmosphere where they originate to the classroom is long enough that it takes the muon several of its half-lives to reach the classroom - by our measurement of time, at least. We should expect them to have decayed before they reach the classroom, but they don't!
By doing the same experiment at multiple elevations we can see that the rate of muon decay is much lower than non-relativistic theories would suggest. However, if time dilation due to their large speed is taken into account then we get that the muons 'experience' a much shorter trip from their point of view - sufficiently short that they don't decay! That they have reached the classroom is easily observed evidence (given a bunch of other knowledge about the decay and formation of muons) for time dilation.
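A minimal Python sketch of that arithmetic (the 15 km production altitude is an assumed, illustrative number; real production heights vary):

```python
import math

c = 3.0e8       # m/s
tau = 2.2e-6    # muon mean lifetime at rest, seconds
v = 0.9994 * c
L = 15_000.0    # assumed production altitude, metres (illustrative)

t_lab = L / v                                # trip time in our frame
gamma = 1.0 / math.sqrt(1.0 - (v / c)**2)    # time-dilation factor, ~29 here
t_muon = t_lab / gamma                       # proper time the muon experiences

print(f"lab frame:  {t_lab / tau:.1f} lifetimes, naive survival ~ {math.exp(-t_lab / tau):.0e}")
print(f"muon frame: {t_muon / tau:.2f} lifetimes, survival ~ {math.exp(-t_muon / tau):.2f}")
# Without dilation essentially none survive (~1e-10); with it, roughly half do.
```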
Also! Time dilation is surprisingly easy to derive. I recommend that you attempt to derive it yourself if you haven't already! I give you this starting point:
A) The speed of light is constant and independent of observers
B) A simple way to analyze time is to consider a very simple clock: two mirrors facing towards each other with a photon bouncing back and forth between the two. The cycles of the photon denotes the passage of time.
C) What if the clock is moving?
D) Draw a diagram
↑ comment by [deleted] · 2012-06-10T01:37:37.390Z · LW(p) · GW(p)
Okay, but if it's not moving through time, it only exists in the point in time in which it was created, no? So it would only be present for one moment in time where it would move constantly until its destruction. We would therefore observe it as moving at infinite speed.
↑ comment by Risto_Saarelma · 2012-06-10T02:28:13.721Z · LW(p) · GW(p)
Remember the thing from the Reddit comment about everything always moving at the constant speed c. The photon has its velocity at a 90° angle from the time axis of space-time, but that's still just a velocity of magnitude c. Can't get infinite velocity because of the rule that you can't change your time-space speed ever.
Things get a bit confusing here, since the photon is not moving through time at all in its own frame of reference, but in the frame of reference of an outside observer, it's zipping around at speed c. Your intuition seems to be not including the bit about time working differently in different frames of reference.
↑ comment by [deleted] · 2012-06-10T12:09:24.813Z · LW(p) · GW(p)
Sorry if I'm being annoying, but the light is not moving through time. So it should not appear at different points in time. If I'm not moving forward, and you are, and you're looking directly to your side, then you'll only see me while I'm next to you. And if I start moving from side to side, then I won't impact you unless you're right next to me. Change "forward" with "futureward" and "side" to "space", and you get my problem with light having zero futureward speed.
My big assumption here is that even though things appear to behave differently from different frames of reference, there is in fact an absolute truth, an absolute way things are behaving. I don't think that's wrong, but if it is, I've got a long way to go before understanding relativity.
↑ comment by bogdanb · 2012-07-02T22:47:33.072Z · LW(p) · GW(p)
[...] but the light is not moving through time. So it should not appear at different points in time [...]
Since it’s not moving through time, light moves only through space. It never appears at different points in time. You can “see” this quite easily if you notice that you can’t encounter the same photon twice, even if you would have something that could detect its passing without changing it, unless you alter its path with mirrors or curved space, because you’d need to go faster than light to catch up with it after it passes you the first time.
In fact, if memory serves, in relativity two events are defined to be instantaneous if they are connected by a photon. For example, if a photon from your watch hits your eye and tells you it’s exactly 5 PM, and another photon hits your eye at the same time and tells you an atom decayed, then technically the atom decayed at exactly 5 PM. That is, in relativity, events happen exactly when you see them. On the other hand, two events that are simultaneous for me may or may not be (and usually aren’t) simultaneous for someone else, hence the word relativity.
(Even if you curve the photon, that just means that you pass twice through the same point in time. Think about it, if the photon can leave you and go back, it means you can see your “past you”, photons reflected off of your body into space and then coming back. Say the “loop” is three light-hours long. Since you can see the watch of the past you show 1PM at the same time you see your watch show 4PM, you simply conclude that the two events are simultaneous, from your point of view.)
I think what’s confusing is that we’re very often told things like “that star is N light years away, so since we’re seeing it now turning into a supernova, it happened N years ago”. That’s not quite a meaningless claim, but “ago” and “away” don’t quite mean the same thing they mean in relativistic equations. In relativity terms, for me it happened in 2012 because the events “I notice that the calendar shows 2012” and “the star blew up” are simultaneous from my point of view.
↑ comment by Risto_Saarelma · 2012-06-10T13:27:57.141Z · LW(p) · GW(p)
I don't have good offhand ideas how to unpack this further, sorry. I'd have to go learn Minkowski spacetime diagrams or something to have a proper idea how you get from timeward-perpendicular spaceward movement into the 45 degree light cone edge, and probably wouldn't end up with a very comprehensible explanation.
↑ comment by bogdanb · 2012-09-01T13:29:41.732Z · LW(p) · GW(p)
Final question: Could you please comment a bit on
http://lesswrong.com/lw/cwq/ask_an_experimental_physicist/7ba5 ?
↑ comment by bogdanb · 2012-09-01T13:27:26.989Z · LW(p) · GW(p)
Hi again shminux, this is my second question. First, I’m sorry if it’s going to be long-winded, I just don’t know enough to make it shorter :-)
It might be helpful if you can get your hands on the August 3 issue of Science (since you’re working at a university perhaps you can find one lying around); the article on page 536 is kind of the backdrop for my questions.
[Note: In the following, unless specified, there are no non-gravitational charges/fields/interactions, nor any quantum effects.]
(1) If I understand correctly, when two black holes merge the gravity waves radiated carry the complete information about (a) the masses of the two BHs, (b) their spins, (c) the relative alignment of the spins, and (d) the spin and momentum of the system, i.e. the exact positions and trajectories before (and implicitly during and after) the collision.
This seems to conflict with the “no-hair” theorem as well as with the “information loss” problem. (“Conflict” in the sense that I, personally can’t see how to reconcile the two.)
For instance, the various simulations I’ve seen of BH coalescence clearly show an event horizon that is obviously not characterized only by mass and spin. They quite clearly show a peanut-shaped event horizon turning gradually into an ellipsoid. (With even more complicated shapes before, although there always seem to be simulation artifacts around the point where two EHs become one in every simulation I saw.) The two “lobes” of the “peanut EH” seem to indicate “clearly” that there are two point masses moving inside, which seems to contradict the statement that you can discern no structure through an EH.
(In jocular terms, I’m pretty sure one can set-up a very complex scenario involving millions of small black-holes coalescing with a big one with just the right starting positions that the EH actually is shaped like hair at some point during the multi-merger. I realize that’s abusing the words, but still, what is the “no-hair theorem” talking about, given that we can have EHs with pretty much arbitrary shape?)
In the same way, I don’t quite get the “information loss paradox” either. Take the simple scenario of an electron and a positron annihilating: in come two particles (coincidentally, they don’t have “hair” either), out come two photons, in other words a “pair” of electromagnetic waves. (Presumably, gravity waves would be generated as well, though since most physics seems to ignore those I presume I’m allowed to, as well.) There are differences, but the scenario seems very similar to black hole merger. Nobody seems to worry about any information loss in that case—basically, there isn’t, as all the information is carried by the leaving EM waves—so why exactly is it a problem with black holes? That is, what is the relevant difference?
[Note: if electrons and annihilation pose problems because of quantum effects, one can make up a completely classical scenario with similar behavior, using concepts no more silly than point masses and rigid rods. I just picked this example because it’s easy to express, and people actually think about it so “why don’t they worry about information loss” makes sense.]
(2) As far as I understand, exactly what happens in (1) also happens when something that is not a black hole falls into one. Take a particle (an object with small mass, small size but too low density to have an EH of its own, no internal structure other than the mass distribution inside it) falling spirally into a BH. AFAIK, this will generate almost exactly the same kind of gravitational waves that would be generated by an in-falling (micro-) black-hole with the same mass, with the only difference being that the waves will have slightly different shape because the density of the falling particle is lower (thus the mass distribution is slightly fuzzier).
Even though the falling particle doesn’t have an EH of its own, AFAIK the effects will be similar, i.e. the black hole’s EH will also form a small bump where the particle hits it, and will then oscillate a bit and radiate gravitational waves until it settles. Like in case (1) above, all the information regarding the particle’s mass and spin should be carried by the gross amplitude and phase of the waves, and the information about the precise shape of the particle (how its mass distribution differs from a point-mass like a micro–black hole) should be carried in the small details of the wave shapes (the tiny differences from how the waves would look if it were a micro–black hole that fell).
(3) Even better, if the particle and/or black hole also has electric charge, as far as I can tell the electro-magnetic field should also contain waves, similar to the electron/positron annihilation mentioned above, that carry all relevant information about the electro-magnetic state of the particles before, during and after the “merger” (well, accretion in this case), in the same way the gravitational waves carry information about mass and spin.
So, as far as I can tell, coalescence and accretion seem to behave very similarly to other phenomena where information loss isn’t (AFAIK) regarded as an issue, and do so even when other forces than gravity are involved. In other words, it seems like all the information is not lost, it’s just “reflected” back into space. I’m not saying that it’s not an issue and all physicists are idiots, I’m just asking what is the difference.
(I have seen explanations of the information loss paradox that don’t cause my brain to raise these questions, but they’re all expressed in very different terms—entropy and the like—and I couldn’t manage to translate in “usual” terms. It’s a bit like using energy conservation to determine the final state of a complex mechanical system. I don’t contradict the results, I just want help figuring out in general terms what actually happens to reach that state.)
↑ comment by Shmi (shminux) · 2012-09-07T17:09:13.267Z · LW(p) · GW(p)
I'll quickly address the no-hair issue. The theorem states only that a single stationary electro-vacuum black hole in 3+1 dimensions can be completely described by just its mass, angular momentum and electric charge. It says nothing about non-stationary (i.e. evolving in time) black holes. After the dust settles and everything is emitted, the remaining black hole has "no hair". Furthermore, this is a result in classical GR, with no accounting for quantum effects, such as the Hawking radiation.
↑ comment by Mitchell_Porter · 2012-09-01T14:58:47.890Z · LW(p) · GW(p)
The information loss problem for black holes is a quantum issue. If the Hawking radiation produced during black hole evaporation were truly thermal, then that would mean that the details of the black hole's quantum state are being irreversibly lost, which would violate standard quantum time evolution. People now mostly think that the details of the state live on, in correlations in the Hawking radiation. But there are no microscopic models of a black hole which can show the mechanics of this. Even in string theory, where you can sometimes construct an exact description of a quantum black hole, e.g. as a collection of branes wrapped around the extra dimensions, with a gas of open strings attached to the branes, this still remains beyond reach.
↑ comment by bogdanb · 2012-09-02T07:44:42.776Z · LW(p) · GW(p)
If the Hawking radiation produced during black hole evaporation were truly thermal, then that would mean that the details of the black hole's quantum state are being irreversibly lost, which would violate standard quantum time evolution.
OK, I know that’s a quite different situation, but just to clarify: how is that resolved for other things that radiate “thermally”? E.g., say we’re dealing with a cooling white dwarf, or even a black and relatively cold piece of coal. I imagine that part of what it radiates is clearly not thermal, but is all radiation “not truly thermal” when looked at in quantum terms? Is the only relevant distinction the fact that you can discern its internal composition if you look close enough, and can express the “thermal” radiation as a statistic result of individual quantum state transitions?
From a somewhat different direction: if all details about the quantum state of the matter before it falls into the black hole are “reflected” back into the universe by gravitational/electromagnetic waves (basically, particles) during formation and accretion, what part of QM prevents the BH from having no state other than mass+spin+temperature?
In fact, I think the part that bothers me is that I’ve seen no QM treatment of BH that looks at the formation and accretion, they all seem to sort of start with an existing BH and somehow assume that the entropy of something thrown into the BH was captured by it. The relevant Wikipedia page starts by saying
The only way to satisfy the second law of thermodynamics is to admit that black holes have entropy. If black holes carried no entropy, it would be possible to violate the second law by throwing mass into the black hole.
But nobody seems to mention the entropy carried by the radiation released during accretion. I’m not saying they don’t, just that I’ve never seen it discussed at all. Which seems weird, since all (non-QM) treatments of accretion I’ve seen suggest (as I’m saying above) that a lot of information (and as far as I can tell, all of it) is actually radiated before the matter ever reaches the EH. To a layman it sounds like discussing the “cow-loss paradox” from a barn without walls...
↑ comment by Mitchell_Porter · 2012-09-02T09:14:03.987Z · LW(p) · GW(p)
how is that resolved for other things that radiate “thermally”?
For something other than a black hole, quantum field theory provides a fundamental description of everything that happens, and yes, you could track the time evolution for an individual quantum state and see that the end result is not truly thermal in its details.
But Hawking evaporation lacked a microscopic description. Lots of matter falls into a small spatial volume; an event horizon forms. Inside the horizon, everything just keeps falling together and collapses into a singularity. Outside the horizon, over long periods of time the horizon shrinks away to nothing as Hawking radiation leaks out. But you only have a semiclassical description of the latter process.
The best candidate explanation is the "fuzzball" theory, which says that singularities, and even event horizons, do not exist in individual quantum states. A "black hole" is actually a big ball of string which extends out to where the event horizon is located in the classical theory. This ball of string has a temperature, its parts are in motion, and they can eventually shake loose and radiate away. But the phase space of a fuzzball is huge, which is why it has a high entropy, and why it takes exponentially long for the fuzzball to get into a state in which one part is moving violently enough to be ejected.
That's the concept, and there's been steady progress in realizing the concept. For example, this paper describes Hawking radiation from a specific fuzzball state. One thing about black hole calculations in string theory is that they reproduce semiclassical predictions for a quantum black hole in very technical ways. You'll have all the extra fields that come with string theory, all the details of a particular black hole in a particular string vacuum, lots of algebra, and then you get back the result that you expected semiclassically. The fact that hard complicated calculations give you what you expect suggests that there is some truth here, but there also seems to be some further insight lacking, which would compactly explain why they work.
Here's a talk about fuzzballs.
nobody seems to mention the entropy carried by the radiation released during accretion
The entropy of the collapsing object jumps enormously once the event horizon forms. Any entropy lost before that is just a detail.
From a string-theory perspective, the explanation of the jump in entropy would be something like this: In string theory, you have branes, and then strings between branes. Suppose you have a collection of point-branes ("D0-branes") which are all far apart in space. In principle, string modes exist connecting any two of these branes, but in practice, the energy required to excite the long-range connections is enormous, so the only fluctuations of any significance will be strings that start and end on the same brane.
However, once the 0-branes are all close to each other, the energy required to excite an inter-brane string mode becomes much less. Energy can now move into these formerly unoccupied modes, so instead of having just N possibilities (N the number of branes), you now have N^2 (a string can start on any brane and end on any other brane). The number of dynamically accessible states increases dramatically, and thus so does the entropy.
↑ comment by bogdanb · 2012-09-02T10:39:51.761Z · LW(p) · GW(p)
nobody seems to mention the entropy carried by the radiation released during accretion
The entropy of the collapsing object jumps enormously once the event horizon forms. Any entropy lost before that is just a detail.
OK, that’s the part that gives me trouble. Could you point me towards something with more details about this jump? That is, how it was deduced that the entropy rises, that it is big rise, and that the radiation before it is negligible? An explanation would be nice (something like a manual), but even a technical paper will probably help me a lot (at least to learn what questions to ask). A list of a dozen incremental results—which is all I could find with my limited technical vocabulary—would help much less, I don’t think I could follow the implications between them well enough.
↑ comment by Mitchell_Porter · 2012-09-02T11:31:21.640Z · LW(p) · GW(p)
The conclusion comes from combining a standard entropy calculation for a star, and a standard entropy calculation for a black hole. I can't find a good example where they are worked through together, but the last page here provides an example. Treat the sun as an ideal gas, and its entropy is proportional to the number of particles, so it's ~ 10^57. The entropy of a solar-mass black hole is the square of the solar mass in units of the Planck mass, so it's ~ 10^76. So when a star becomes a black hole, its entropy jumps by a factor of about 10^20.
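A back-of-envelope version in Python (rounded constants; note that with the 4π prefactor in the Bekenstein-Hawking formula the black-hole figure comes out nearer 10^77, which is where the factor of ~10^20 comes from):

```python
import math

hbar = 1.055e-34   # J s
G = 6.674e-11      # m^3 kg^-1 s^-2
c = 3.0e8          # m/s
M_sun = 1.989e30   # kg
m_p = 1.673e-27    # kg, nucleon mass

# Stellar entropy ~ particle count (ideal-gas, order of magnitude), in units of k_B
S_star = M_sun / m_p

# Bekenstein-Hawking entropy: S / k_B = 4 * pi * G * M^2 / (hbar * c)
S_bh = 4 * math.pi * G * M_sun**2 / (hbar * c)

print(f"star:       ~10^{math.log10(S_star):.0f} k_B")
print(f"black hole: ~10^{math.log10(S_bh):.0f} k_B")
print(f"jump:       ~10^{math.log10(S_bh / S_star):.0f}x")
```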
What's lacking is a common theoretical framework for both calculations. The calculation of stellar entropy comes from standard thermodynamics, the calculation of black hole entropy comes from study of event horizon properties in general relativity. To unify the two, you would need to have a common stat-mech framework in which the star and the black hole were just two thermodynamic phases of the same system. You can try to do that in string theory but it's still a long way from real-world physics.
For what I was saying about 0-branes, try this. The "tachyon instability" is the point at which the inter-brane modes come to life.
↑ comment by bogdanb · 2012-09-01T11:59:09.653Z · LW(p) · GW(p)
Hi shminux, thanks for your offer!
I have some black hole questions I’ve been struggling with for a week (well, years actually, I just thought about it more than usual during the last week or so) that I couldn’t find a satisfactory explanation for. I don’t think I’m asking about really unknown things, rather all explanations I see are either pop-sci explanations that don’t go deep enough, or detailed descriptions in terms of tensor equations that are too deep for what math I remember from university. I’m hoping that you could hit closer to the sweet spot :-)
I’ll split this into two comments to simplify threading. This first one is sort of a meta question:
Take for instance FIG. 1 from http://arxiv.org/pdf/1012.4869v2.pdf or the video at http://www.sciencemag.org/content/suppl/2012/08/02/337.6094.536.DC1/1225474-s1.avi
I think I understand the what of the image. What I don’t quite get is the when and where of the thing.
That is, given that time and space bend in weird and wonderful ways around the black holes, and more importantly, they bend differently at different spots around them, what exactly are the X, Y and Z coordinates that are projected to the image plane (and, in the case of the video, the T coordinate that is “projected” onto the duration of the video), given that the object in the image(s) is supposed to display the shape of time and space?
The closest I got trying to find answers:
(1) I saw Penrose diagrams of matter falling into a black hole, though I couldn’t find one of merging black holes. I couldn’t manage to imagine what one would look like, and I’m not quite sure it makes sense to ask for one: Since the X coordinate in a Penrose diagram is supposed to be distance from the singularity, I don’t see how you can put two of those, closing to each other, in one picture. Also, my brain knotted itself when trying to imagine more than one “spot” where space turns into time, interacting. On the other hand, that does look a bit like the coalescence simulations I’ve seen, so I might not be that far from the truth.
(2) I suppose the images might be space-like slices through the event, perhaps separated by equal time-like intervals at infinity in the case of the video. I don’t want to speculate more, in case I’m really far from the mark, so I’ll wait for an answer first.
(In case it helps with the answer: I do know what an integral is (including path, surface, and volume integrals), though I probably can’t do much with a complicated one mathematically. Similarly for derivatives, gradient, curl and divergence, though I have to think quite carefully to interpret the last two. If you say “manifold” and don’t have a good picture my eyes tend to glaze over, though. I sort of understand space curvature and frame-dragging, when they’re not too “sharp”, qualitatively if not quantitatively. I can visualize either of them—again, as long as they’re not “sharp” enough to completely reverse space and time dimensions; i.e., I have an approximate idea of what happens when you’re close to an event horizon, but not what goes on as you “cross” one. (Actually, I’m not sure I understand what “crossing an EH” means, again it’s the “when” and “where” the seem to be the trouble rather than the “what”; most simple explanations tend to indicate that there’s not much of a “what”, as in “nothing much happens as you cross one that doesn’t happen just before or just after”.) I can’t quite visualize a general tensor field, but when you split the Riemann tensor into tidal and frame-dragging components I can interpret the tendex and vortex lines on a well-drawn diagram if I think carefully.)
↑ comment by Shmi (shminux) · 2012-09-01T20:49:14.237Z · LW(p) · GW(p)
I saw Penrose diagrams of matter falling into a black hole, though I couldn’t find one of merging black holes.
I'll try to draw one and post it, might take some time, given that you need more dimensions than just 1 space + 1 time on the original Penrose diagram, because you lose spherical symmetry. The head-on collision process still retains cylindrical symmetry, so a 2+1 picture should do it, represented by a 3D Penrose diagram, which is going to take some work.
↑ comment by A1987dM (army1987) · 2012-06-09T17:26:54.472Z · LW(p) · GW(p)
See the end of the second-last paragraph of this.
↑ comment by Shmi (shminux) · 2012-06-09T18:47:31.344Z · LW(p) · GW(p)
Now, if the Sun gets lighter, the planets do drift away so they have more (i.e. less negative) potential energy, but this is compensated by the kinetic energy of particles escaping the Sun... or something.
That's right. The total energy of Sun+planets+escaped matter is classically conserved. Fortunately, the velocities and gravitational fields are small enough for the Newtonian gravity to be a very good approximation, so there are no relativistic complications.
I'm not an expert in general relativity, and I hear that it's non-trivial to define the total energy of a system when gravity is non-negligible, but the local conservation of energy and momentum does still apply.
That's true, the total energy in GR is only defined for a system with an "asymptotic time translation symmetry", but most isolated systems are like that (what happens far away from massive objects is not significantly affected by the details of the orbital motion and such). There is a marginal-quality wiki article on the subject.
comment by Mitchell_Porter · 2012-06-09T04:28:46.815Z · LW(p) · GW(p)
Rolf's PhD. Look for the reference to the robot uprising...
comment by James_Miller · 2012-06-09T06:20:14.357Z · LW(p) · GW(p)
How good an understanding of physics is it possible to acquire if you read popular books such as Greene's but never look at the serious math of physics? Is there lots of stuff in the math that can't be conveyed with mere words, simple equations and graphs?
↑ comment by RolfAndreassen · 2012-06-09T20:08:42.958Z · LW(p) · GW(p)
I guess it depends on what you mean by 'understanding'. I personally feel that you haven't really grasped the math if you've never used it to solve an actual problem - a textbook problem will do, but ideally something not designed for solvability. There's a certain hard-to-convey Fingerspitzengefühl, intuition, feel-for-the-problem-domain - whatever you want to call it - that comes only with long practice. It's similar to debugging computer programs, which is a somewhat separate skill from writing them; I talk about it in some detail in this podcast and these slides.
That said, I would say you can get quite a good overview without any math; you can understand physics in the same sense I understand evolutionary biology - I know the basic principles but not the details that make up the daily work of scientists in the field.
↑ comment by satt · 2012-06-09T20:47:36.852Z · LW(p) · GW(p)
Podcast & slide links point to the same lecture9.pdf file, BTW.
↑ comment by RolfAndreassen · 2012-06-09T22:46:35.313Z · LW(p) · GW(p)
Thanks, edited.
↑ comment by Douglas_Knight · 2012-06-09T17:03:39.915Z · LW(p) · GW(p)
Those two questions are completely unrelated. Popular physics books just aren't trying to convey any physics. That is their handicap, not the math. Greene could teach you a lot of physics without using math, if he tried. But there's no audience for such books.
Eliezer's quantum physics sequence impressed me with its attempt to avoid math, but it seems to have failed pretty badly.
↑ comment by A1987dM (army1987) · 2012-06-09T22:24:43.629Z · LW(p) · GW(p)
QED by Feynman is an awesome attempt to explain advanced physics without any maths. (But it was originally a series of lectures, made into a book at a later time.)
One of the things that irked me about Penrose's The Road to Reality is that he didn't seem to have made up his mind about who his audience was supposed to be, as he first painstakingly explains certain concepts that should be familiar to high-school seniors, and then discusses topics that even graduate physics students (e.g. myself) would have difficulties with. But then I remembered that I aimed for exactly the same thing in the Wikipedia articles I edited: if the whole article is aimed at a very specific audience, i.e. physics sophomores (as a textbook would be), then whoever is at a lower ‘level’ will understand little of it and whoever is at a higher level will find little they didn't already know, whereas making the text more and more advanced as the article progresses lets each reader find something at the right level for them.
↑ comment by James_Miller · 2012-06-09T17:57:20.685Z · LW(p) · GW(p)
it seems to have failed pretty badly.
Why?
↑ comment by TimS · 2012-06-11T11:29:50.971Z · LW(p) · GW(p)
The point of the quantum mechanics sequence was the contrast between Rationality and Empiricism. By writing at least 2/3 of the text about quantum mechanics, Eliezer obscured this point in order to pick an unnecessary fight about the proper interpretation of particular experimental results in physics.
Even now, it is unclear whether he won that fight, and that counts as a failure because MWI vs. Copenhagen was supposed to be a case study of the larger point about the advantages of Rationality over Empiricism, not the main thing to be debated.
↑ comment by private_messaging · 2012-06-11T06:57:05.186Z · LW(p) · GW(p)
The one time he did math (interferometer example) he got the phases wrong, probably as a result of confusing a phase of 180° with i, and who knows what other misunderstandings (I wouldn't bet money he understood phase at all). The worst sort of popularization is where the author doesn't even know the topic first-hand (i.e. mathematically).
Even worse is this idiot idea above in this thread that you can evaluate someone else's strength as a rationalist or something by seeing if they agree with your opinion on a topic you very, very poorly understand, not even well enough to get any math right. A big chunk of 'rationalism' here is plain dilettantism of the worst form. The belief that you don't need to know any subtleties to form opinions. The belief that those opinions for which you didn't need to know subtleties matter (they usually don't). EY has an excuse with MWI - afaik he had a personal loss at the time, and MWI is very comforting. Others here have no such excuse.
edit: I guess 5 people want an explanation of what was wrong? Another link. There are several others. The QM sequence is the very best example of what popularizations shouldn't be like, and of how a rational person shouldn't think about physics. If you can't get elementary shit right, shut up about philosophy; you are not being rational, simply making mistakes. Purely Bayesian belief updates don't matter if you update the wrong things given evidence.
↑ comment by itaibn0 · 2012-06-10T19:45:21.322Z · LW(p) · GW(p)
You and army1987, in your responses, seem to think that math is the same thing as formulas. While there is a lot that can be done without formulas, physics is impossible without math. For instance, to understand spin one needs to understand representation theory. army1987 mentioned QED. Well, QED certainly does have math: it presents complex numbers, path integrals, and the stationary phase approximation. Math is just thinking that is absolutely and completely precise.
ADDED: I forgot to consider the context of the statements I referenced: they were responses to James_Miller, who clearly used 'math' to mean what appears in math textbooks. This makes my criticism invalid. I'm sorry.
Replies from: Douglas_Knight, army1987↑ comment by Douglas_Knight · 2012-06-10T20:11:17.699Z · LW(p) · GW(p)
You make several contradictory claims and I disagree with all of them.
Replies from: itaibn0↑ comment by A1987dM (army1987) · 2012-06-10T20:12:16.438Z · LW(p) · GW(p)
From the context, I guess that was not what James_Miller meant.
comment by dspeyer · 2012-06-09T05:18:14.413Z · LW(p) · GW(p)
How viable do you think neutrino-based communication would be? It's one of the few things that could notably cut nyc<->tokyo latency, and it would completely kill blackout zones. I realize current emitters and detectors are huge, expensive and high-energy, but I don't have a sense of how fundamental those problems are.
Replies from: RolfAndreassen, RolfAndreassen↑ comment by RolfAndreassen · 2012-06-09T06:04:12.011Z · LW(p) · GW(p)
I don't think it's going to be practical this century. The difficulty is that the same properties that let you cut the latency are the ones that make the detectors huge: Neutrinos go right through the Earth, and also right through your detector. There's really no way around this short of building the detector from unobtainium, because neutrinos interact only through the weak force, and there's a reason it's called 'weak'. The probability of a neutrino interacting with any given five meters of your detector material is really tiny, so you need a lot of them, or a huge and very dense detector, or both.

Then, you can't modulate the beam; it's not an electromagnetic wave, there's no frequency or amplitude. (Well, to be strictly accurate, there is, in that neutrinos are quantum particles and therefore of course are also waves, as it were. But the relevant wavelength is so small that it's not useful; you can't build an antenna for it. For engineering purposes you really cannot model it as anything but a burst of particles, which has intensity but not amplitude.) So you're limited to Morse code or similar. Hence you lose in bandwidth what you gain in latency.

Additionally, neutrinos are hard to produce in any numbers at a precise moment. You're relying on muon decays, which of course are a fundamentally random process. So the variables you're actually controlling are the direction and intensity of your muon beam, and at respectable fractions of lightspeed you just can't turn them around on a dime. Plus you get the occasional magnet quench or whatnot, and lose the beam and have to spend five minutes building it up again. So, not only are you limited to dots and dashes, you can't even generate them fast and reliably.
All that said, what application other than finance really needs better latency than you get by going at lightspeed through orbit? And while it's true that people would make money off that, I don't see any particular social return to it. Liquidity is a fine thing, but I cannot fathom that it matters to have it on millisecond scales - seconds should be just fine, and we're already way beyond that just with lightspeed the long way around. As for blackout zones, are you thinking of cellphones? I suggest that this is a bad idea. To get a reliable signal in a man-portable detector you would have to have a very intense neutrino burst indeed; and then you'd also get a reliable signal in the body of the guy holding it. We detect neutrinos by the secondary radiation they cause. I haven't worked the numbers, but even if cancers were rare enough to put up with, think of the lawsuits.
Replies from: Alicorn, kilobug, epigeios↑ comment by Alicorn · 2012-06-09T06:10:28.303Z · LW(p) · GW(p)
I like this comment because it is full of sentence structures I can follow about topics I know nothing about. I write a lot of thaumobabble and I try to make it sound roughly like this, except about magic.
Replies from: Nornagest, Bugmaster↑ comment by Bugmaster · 2012-06-11T22:07:02.164Z · LW(p) · GW(p)
Where can I read some of your best thaumobabble ? In addition to the Luminosity books, I mean; I'd read those.
I do enjoy me some fine vintage thaumobabble.
Replies from: Alicorn↑ comment by Alicorn · 2012-06-11T22:54:37.102Z · LW(p) · GW(p)
My thaumobabble is mostly in Elcenia. If you're only looking for thaumobabble samples and don't have any interest in the story, you might want to skip around to look at mentions of the name "Kaylo", because he does it a lot.
Replies from: Bugmaster↑ comment by kilobug · 2012-06-09T14:38:07.848Z · LW(p) · GW(p)
All that said, what application other than finance really needs better latency than you get by going at lightspeed through orbit?
Through orbit is very bad for low latency. Lowest latency is through undersea optical fiber with modern technology, and that gives around 100ms round-trip for New York-Tokyo (according to Wolfram Alpha), at best. So probably around 150ms in real-life conditions, with routing and not taking exactly the straightest path. Which isn't that great.
As a geek, my first thought is: ssh! ;) Starting at 100ms and above, the ssh experience starts to feel laggy; you don't get an instantaneous-feeling reaction when you move the cursor around, which is not pleasant.
More realistically: everything that is "real-time": phone/VoIP/video conferencing, real-time gaming like RTS or FPS, maybe even remote-controlled surgery (not my field of expertise, so I'm not sure about that).
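As a rough sanity check of these figures - a sketch only, with approximate city coordinates, an assumed fiber refractive index of about 1.47, and a spherical Earth:

```python
import numpy as np

# Rough latency comparison for New York <-> Tokyo: light in fiber along the
# great circle vs. a hypothetical neutrino beam along the straight chord
# through the Earth. Coordinates and the fiber index are approximations.
C = 299_792_458.0          # m/s, speed of light in vacuum
R = 6_371_000.0            # m, mean Earth radius
N_FIBER = 1.47             # typical refractive index of optical fiber

def great_circle(lat1, lon1, lat2, lon2):
    p1, l1, p2, l2 = map(np.radians, (lat1, lon1, lat2, lon2))
    central = np.arccos(np.sin(p1) * np.sin(p2) +
                        np.cos(p1) * np.cos(p2) * np.cos(l2 - l1))
    return central * R, central

surface, angle = great_circle(40.71, -74.01, 35.68, 139.65)  # NYC, Tokyo
chord = 2 * R * np.sin(angle / 2)

print(f"fiber round trip : {2e3 * surface * N_FIBER / C:.0f} ms")  # ~105 ms
print(f"chord round trip : {2e3 * chord / C:.0f} ms")              # ~65 ms
```

So a straight chord at vacuum lightspeed would cut the round trip from roughly 105 ms to roughly 65 ms: real, but not dramatic.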
Replies from: RolfAndreassen↑ comment by RolfAndreassen · 2012-06-09T19:42:45.714Z · LW(p) · GW(p)
My experience with games across the Pacific is that timezone coordination is much more of an issue than latency, but then again I don't play twitch games. So, I take your point, but I really do not see neutrinos solving the problem. If I were an engineer with a gun held to my head I would rather think in terms of digging a tunnel through the crust and passing ordinary photons through it!
↑ comment by epigeios · 2012-06-12T03:56:35.484Z · LW(p) · GW(p)
Wait wait wait. A muon beam exists? How does that work? How accurate is it? Does it only shoot out muons, or does it also shoot out other particles?
Replies from: RolfAndreassen, Dreaded_Anomaly↑ comment by RolfAndreassen · 2012-06-12T04:15:33.199Z · LW(p) · GW(p)
Well, for values of 'exist' equal to "within vast particle accelerators". You produce muons by a rather complicated process: First you send a proton beam at graphite, which produces kaons and pions. You focus these beams using magnetic fields, and they decay to muons. Muons are relatively long-lived, so you guide them into a circular storage ring. They decay to a muon neutrino, an electron anti-neutrino, and an electron.
I'm not sure whether accuracy is a good question in these circumstances. Our control of the muons is good enough to manipulate them as described above, and we're talking centimeter distances at quite good approximations to lightspeed, but it's not as though we care about the ones that miss, except to note that you don't go into the tunnel when the beam is active.
You do get quite a lot of other particles, but they don't have the right mass and momentum combinations for the magnets to guide them exactly into the ring, so they end up slightly increasing the radiation around the production apparatus.
The above is for the Gran Sasso experiment; there may be other specific paths to muon beams, but the general approach - start with protons, electrons, or some other easily accessible particle, and focus the products of collisions - applies broadly. Of course this means you can't get anywhere near the luminosity of the primary beams, since there's a huge loss at each conversion-and-focusing step.
↑ comment by Dreaded_Anomaly · 2012-06-12T09:45:59.258Z · LW(p) · GW(p)
There is actually some research being done into the creation of a muon collider.
↑ comment by RolfAndreassen · 2012-06-18T17:27:55.755Z · LW(p) · GW(p)
Here's another article saying basically the same thing I say below, but with extra flair.
comment by [deleted] · 2012-06-09T22:44:11.583Z · LW(p) · GW(p)
I have three pretty significant questions: Are you a strong rationalist (good with the formalisms of Occam's Razor)? Are you at all familiar with String Theory (in the sense of doing the basic equations)? If yes to both, what is your Bayes-goggles view on String Theory?
What on earth is the String Theory controversy about, and is it resolvable at a glance like QM's MWI?
Replies from: Mitchell_Porter, RolfAndreassen, shminux↑ comment by Mitchell_Porter · 2012-06-10T11:03:06.083Z · LW(p) · GW(p)
There isn't a unified "string theory controversy".
The battle-tested part of fundamental physics consists of one big intricate quantum field theory (the standard model, with all the quarks, leptons etc) and one non-quantum theory of gravity (general relativity). To go deeper, one wishes to explain the properties of the standard model (why those particles and those forces, why various "accidental symmetries" etc), and also to find a quantum theory of gravity. String theory is supposed to do both of these, but it also gets attacked on both fronts.
Rather than producing a unique prediction for the geometry of the extra dimensions, leading to unique and thus sharply falsifiable predictions for the particles and forces, present-day string theory can be defined on an enormous, possibly infinite number of backgrounds. And even with this enormous range of vacua to choose from, it's still considered an achievement just to find something with a qualitative resemblance to the standard model. Computing e.g. the exact mass of the "electron" in one of these stringy standard models is still out of reach.
Here is a random example of a relatively recent work of string phenomenology, to give you an idea of what is considered progress. The abstract starts by saying that certain vacua are known which give rise to "the exact MSSM spectrum". The MSSM is the standard model plus minimal supersymmetry. Then they point out that these vacua will also have to have an extra electromagnetism-like force ("gauged U(1)_B-L"). We don't see such a force, so therefore the "B-L" photons must be heavy, and the gist of the paper is to point out that this can be achieved if one of the neutrino superpartners acts like a Higgs field (by "acquiring a vacuum expectation value"). In fact this paper doesn't contain string calculations per se; it's an argument at the level of quantum field theory, that the field-theory limit of these string models is potentially consistent with experiment.
That might not sound exciting, but in fact it's characteristic, not just of string phenomenology, but of theoretical particle physics in general. Progress is incremental. Grand unified theories don't explain the masses of the particles, but they can explain the charges. String theory hasn't yet explained the masses, but it has the potential to do so, in that they will be set by the stabilized size and shape of the extra dimensions. The topology of the extra dimensions is (currently) a model-building choice, but once that choice is made, the masses should follow, they're not free parameters as in field theory.
As for what might determine the topology of the extra dimensions, anthropic selection is a popular answer these days - and that has become another source of dissatisfaction for string theory's critics, because it looks like another step back from predictivity. Except in very special cases like the cosmological constant, where a large value makes any kind of physical structure impossible, there's enormous scope for handwaving explanations here... Actually, there are arguments that the different vacua of the "landscape" should be connected by quantum tunneling, so the vacuum we are in may be a long-lived metastable vacuum arrived at after many transitions in the primordial universe. But even if that's true, it doesn't tell you whether the number of metastable minima in the landscape is one or a googol. This is an aspect of string theory which is even harder than calculating the particle masses in a particular vacuum, judging by the amount of attention it gets. The empirical side of string theory is still dominated by incrementally refining the level of qualitative approximation to the standard model (including the standard cosmological model, "lambda CDM") that is possible.
As for quantum gravity, the situation is somewhat different. String theory offers a particular solution to the problems of quantum gravity, like accounting for black hole entropy, preserving unitarity during Hawking evaporation, and making graviton behavior calculable. I'd say it is technically far ahead of any rival quantum gravity theory, but none of that stuff is observable. So approaches to quantum gravity which are much less impressive, but also much simpler, continue to have supporters.
Replies from: None↑ comment by RolfAndreassen · 2012-06-10T03:55:11.068Z · LW(p) · GW(p)
I don't do formal Bayes or Kolmogorov on a daily basis; in particle physics Bayes usually appears in deriving confidence limits. Still, I'm reasonably familiar with the formalism. As for string theory, my jest in the OP is quite accurate: I dunno nuffin'. I do have some friends who do string-theoretical calculations, but I've never been able to shake out an answer to the question of what, exactly, they're calculating. My basic view of string theory has remained unchanged for several years: Come back when you have experimental predictions in an energy or luminosity range we'll actually reach in the next decade or two. Kthxbye.
The controversy is, I suppose, that there's a bunch of very excited theorists who have found all these problems they can sic their grad students on, problems which are hard enough to be interesting but still solvable in a few years of work; but they haven't found any way of making, y'know, actual predictions of what will happen in current or planned experiments if their theory is correct. So the question is, is this a waste of perfectly good brains that ought to be doing something useful? The answer seems to me to be a value judgement, so I don't think you can resolve it at a glance.
Replies from: None↑ comment by Shmi (shminux) · 2012-06-10T04:32:57.123Z · LW(p) · GW(p)
What on earth is the String Theory controversy about, and is it resolvable at a glance like QM's MWI?
I wonder how you resolve the MWI "at a glance". There are strong opinions on both sides, and no convincing (to the other side) argument to resolve the disagreement. (This statement is an indisputable experimental fact.) If you mean that you are convinced by the arguments from your own camp, then I doubt that it counts as a resolution.
Also, Occam's razor is nearly always used by physicists informally, not calculationally (partly because Kolmogorov complexity is not computable).
As for string theory, I don't know how to use Bayes to evaluate it. On one hand, this model gives some hope of eventually finding something workable, since it has provided a number of tantalizing hints, such as the holographic principle and various dualities. On the other hand, every testable prediction it has ever made has been successfully falsified. Unfortunately, there are few other competing theories. My guess is that if something better comes along, it will yield string theory in some approximation.
Replies from: wedrifid, None↑ comment by wedrifid · 2012-06-10T08:17:45.180Z · LW(p) · GW(p)
I wonder how you resolve the MWI "at a glance". There are strong opinions on both sides, and no convincing (to the other side) argument to resolve the disagreement. (This statement is an indisputable experimental fact.) If you mean that you are convinced by the arguments from your own camp, then I doubt that it counts as a resolution.
MagnetoHydroDynamics may find this most useful as an answer to his first question rather than to his question about string theory. It gives him significant information about your rationalist strengths and ability to apply Occam's Razor usefully. To use the language above, we could describe this in terms of 'camps'. Magneto can identify you as not part of his desired camp and correctly use that to determine how much weight to place on your testimony in other areas. (Not belonging to his 'camp', you would naturally either disagree or take offence at his disrespect.)
Replies from: private_messaging, None↑ comment by private_messaging · 2012-06-11T06:44:56.417Z · LW(p) · GW(p)
Evaluating 'rationalist strengths' via answers to questions about physics that you don't actually know well enough to evaluate anything is also a very effective way to be stupid and to reveal your own ignorance of QM.
↑ comment by [deleted] · 2012-06-10T12:21:25.854Z · LW(p) · GW(p)
As wedrifid says, this comment tells me that you are, regrettably, not as strong a Bayesian as I would wish many physicists were.
Resolving MWI at a glance involves looking at the Schroedinger equation and concluding that it gives rise to decoherence, and that when decoherence gets large enough, the Schroedinger equation says nothing anomalous happens; it just keeps on being decoherent.
That is literally the whole of the argument, and to me saying that something extra and mysterious happens is stupid in an absolute sense. Run a two-particle, single-spatial-dimension, time-dependent sim of the Schroedinger equation, starting with a high level of quantum independence, and you will see decoherence as plain as day.
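For concreteness, here is a minimal sketch of the kind of simulation described above (split-step Fourier method; the grid size, interaction strength, and wavepacket parameters are arbitrary illustrative choices, not anything specified in this thread):

```python
import numpy as np

# Two distinguishable particles on a 1D grid (hbar = m = 1). The joint
# wavefunction psi(x1, x2) starts as a product state ("high independence");
# a soft contact interaction, comparable to the kinetic energy so that both
# transmission and reflection occur, entangles the particles, and the purity
# of particle 1's reduced density matrix falls below 1.

N, L = 128, 40.0
x = np.linspace(-L/2, L/2, N, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
X1, X2 = np.meshgrid(x, x, indexing='ij')
K1, K2 = np.meshgrid(k, k, indexing='ij')

def packet(x0, p0):
    g = np.exp(-(x - x0)**2 / 4.0 + 1j * p0 * x)
    return g / np.sqrt(np.sum(np.abs(g)**2) * dx)

psi = np.outer(packet(-5.0, 1.5), packet(5.0, -1.5))  # product state

V = 2.0 * np.exp(-(X1 - X2)**2)        # soft interparticle potential
T = 0.5 * (K1**2 + K2**2)              # kinetic energy in k-space
dt = 0.005
expV = np.exp(-0.5j * dt * V)          # half-step potential (Strang splitting)
expT = np.exp(-1j * dt * T)            # full-step kinetic

def purity(psi):
    rho1 = (psi @ psi.conj().T) * dx   # reduced density matrix of particle 1
    rho1 /= np.trace(rho1)
    return np.real(np.trace(rho1 @ rho1))

for step in range(2001):
    if step % 400 == 0:
        print(f"t = {step*dt:5.2f}  purity of particle 1 = {purity(psi):.3f}")
    psi = expV * psi
    psi = np.fft.ifft2(expT * np.fft.fft2(psi))
    psi = expV * psi
```

The purity starts at 1 (a pure product state) and drops once the packets collide: each particle has, in effect, been decohered by the other.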
Decoherence is simple, falsifiable, and explains all hitherto observed data.
The Collapse postulate breaks CPT symmetry, violates conservation of the quantum Hamiltonian, violates Liouville's theorem, violates relativistic locality, is non-linear, non-unitary, non-differentiable, inherently stochastic, poorly defined, anthropocentric, and formulated in deep confusion.
Pick your side.
Replies from: Vaniver, Mitchell_Porter, None, shminux↑ comment by Vaniver · 2012-06-10T19:09:47.439Z · LW(p) · GW(p)
As wedrifid says, this comment tells me that you are, regrettably, not as strong a Bayesian as I would wish many physicists were.
And your comment makes obvious that you are not a physicist, and have learned QM from someone who is not a physicist. Quick, without looking it up- what percentage of physicists subscribe to MWI? What are two alternative interpretations of QM besides Copenhagen and MWI?
Pick your side.
This is entirely the wrong attitude to have.
Replies from: army1987, None↑ comment by A1987dM (army1987) · 2012-06-11T17:40:00.282Z · LW(p) · GW(p)
Quick, without looking it up- what percentage of physicists subscribe to MWI?
I'm a physicist and I wouldn't know that myself. Especially because I seem to recall different surveys giving vastly different results.
Replies from: Vaniver↑ comment by Vaniver · 2012-06-11T17:57:44.067Z · LW(p) · GW(p)
Sure! The approach is the informative part, and I should have worded my post better to make that clearer. Something along the lines of "why do you believe that many physicists reject MWI for those reasons?" would have been less confrontational and probably more communicative.
↑ comment by [deleted] · 2012-06-10T20:57:02.278Z · LW(p) · GW(p)
And your comment makes obvious that you are not a physicist, and have learned QM from someone who is not a physicist.
Yes, I am not a physicist; I will at best be a first-year CS bachelor student a little over a year from the time of posting. I am, however, really good at mathematics. Good enough, in fact, to be able to solve partial differential equations in complex scalar fields, and to simulate them with custom-written C programs as a hobby.
I might not know the first equation of QFT, but I can write down a Schroedinger wave-packet equation, derive its time-dependent differential form, discretize it, and simulate it in a two-dimensional discrete Hilbert space with dependent potential wells, with the initial state set to high independence.
Quick, without looking it up- what percentage of physicists subscribe to MWI?
I don't know, but apparently not enough. I seem to remember having heard a figure of more than half, but I may be misremembering.
What are two alternative interpretations of QM besides Copenhagen and MWI?
The Transactional Interpretation and Bohmian mechanics, or however they are spelt. The former is something tricky to do with time-reversed wave packets; the latter postulates point-shaped particles in addition to the wave packets and was disproven early on.
This is entirely the wrong attitude to have.
Yes it is; I am sorry, it was a rhetorical slip-up.
↑ comment by Mitchell_Porter · 2012-06-10T22:30:06.237Z · LW(p) · GW(p)
If you could look at the wavefunction and count the worlds by inspection, then these claims would have something to them. But you can't. By inspection you can see, e.g., that a particular wavefunction contains two wavepackets, one of which is N times as high as the other. How do you go from that, to one outcome being N^2 times as frequent as the other?
Replies from: None↑ comment by [deleted] · 2012-06-10T22:36:56.025Z · LW(p) · GW(p)
You generally don't. If I may in retrospect reword my argument, I will say that given the Schroedinger Equation there is nothing stopping decoherence from getting macroscopic. Why the Born Rule works, I have no idea, but I am pretty damn certain it has a non-mysterious explanation.
It seems I was confused about what terms were synonymous and what weren't.
But the Copenhagen Interpretation is still stupid.
↑ comment by Shmi (shminux) · 2012-06-10T18:19:43.710Z · LW(p) · GW(p)
Run a two-particle, single-spatial-dimension, time-dependent sim of the Schroedinger equation, starting with a high level of quantum independence, and you will see decoherence as plain as day.
Please feel free to post a link to such a sim. I'm almost willing to bet real money against it. That you would even propose that decoherence can be observed without including the environment in the simulation tells me how much of QM you really understand.
The Collapse postulate breaks CPT symmetry, violates conservation of the quantum Hamiltonian, violates Liouville's theorem, violates relativistic locality, is non-linear, non-unitary, non-differentiable, inherently stochastic, poorly defined, anthropocentric, and formulated in deep confusion.
You mean, the straw collapse EY constructed and happily demolished. The windmill you are fighting has nothing to do with the orthodox formulation of QM, which is perfectly compatible with decoherence.
Replies from: None, Micaiah_Chang↑ comment by [deleted] · 2012-06-10T20:45:24.300Z · LW(p) · GW(p)
The windmill you are fighting has nothing to do with the orthodox formulation of QM, which is perfectly compatible with decoherence.
What is the orthodox formulation of QM? Link?
Replies from: shminux, army1987↑ comment by Shmi (shminux) · 2012-06-10T21:11:36.227Z · LW(p) · GW(p)
The orthodox formulation of QM (given in Griffiths and most other modern QM texts) is the following:
1. Time evolution of an isolated system is governed by the time-dependent Schroedinger equation (this includes einselection when the system is no longer isolated).
2. The Born rule: after a measurement is performed on a system in a given state, the probability of observing a given eigenstate is given by the square modulus of the projection of the system's state onto that eigenstate.
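In symbols - a standard statement of the rule, with Pi_i the projector onto eigenstate |a_i> and rho the system's density matrix:

```latex
P(a_i) \;=\; \left|\langle a_i \mid \psi \rangle\right|^2
       \;=\; \operatorname{Tr}\!\left(\rho\,\Pi_i\right),
\qquad \Pi_i = |a_i\rangle\langle a_i| .
```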
Note that there is no mention of collapse. The Born rule is the miracle step in the orthodox approach (and is an open problem in physics), just like it is in any other approach, including the MWI (EY admitted as much, if you want an argument from authority).
The interpretational confusion starts once you try to invent the reasons behind the Born rule. A proper scientific way to address the issue would be to construct a model which explains the Born rule AND makes other testable predictions separate from the orthodox QM. This conjunction is essential. There is no way to simply "dissolve" the question.
Replies from: pragmatist, None↑ comment by pragmatist · 2012-06-10T23:31:37.303Z · LW(p) · GW(p)
The orthodox formulation of QM (given in Griffiths and most other modern QM texts)...
Here's Griffiths on what he calls the "orthodox position" on quantum indeterminacy. This is from pages 3-5 of his text:
It was the act of measurement that forced the particle to "take a stand"... Jordan said it most starkly: "Observations not only disturb what is to be measured, they produce it... We compel [the particle] to assume a definite position."... Among physicists it has always been the most widely accepted position. Note, however, that if it is correct there is something very peculiar about the act of measurement...
We say that the wave function collapses upon measurement, to a spike... There are, then, two entirely distinct kinds of physical processes: "ordinary" ones, in which the wave function evolves in a leisurely fashion under the Schrodinger equation, and "measurements", in which [it] suddenly and discontinuously collapses.
Going by Griffiths' own account (and I picked Griffiths because he's the authority you cited), what Eliezer says about the orthodox interpretation is not a strawman. In fact, Griffiths explicitly discusses the view you call the "orthodox formulation", except he doesn't call it "orthodox". He describes it as an alternative to the orthodox position, and labels it the "agnostic position".
I think your flavor of instrumentalism is a respectable position in the foundational debate, but to describe it as the standard position is incorrect. I think there was a time when physicists in general had a more operationalist bent, but things have changed.
Replies from: shminux↑ comment by Shmi (shminux) · 2012-06-11T01:38:20.413Z · LW(p) · GW(p)
Hmm, I suppose my personal classification is slightly different. Thanks for pointing that out.
The agnostic position is "shut up and calculate", which is basically resigning oneself to the inability to model the Born rule with anything better.
The instrumentalist position is to admit that doing research related to the Born rule origins is essential for progress in understanding the fundamentals of QM, but to also acknowledge that interpretations are not interesting physical models and at best have only an inspirational value.
The realist position (hidden variables are fundamental, collapse is fundamental, or MWI is fundamental, or Bohmian mechanics is fundamental) is the one that is easiest to falsify, as soon as it sticks its neck out with testable predictions (Bohm and collapse do not play well with relativity, local hidden variables run afoul of the Bell theorem, MWI makes no testable predictions whatsoever).
I suppose the confusion is that last paragraph: "There are, then, two entirely distinct kinds of physical processes: "ordinary" ones, in which the wave function evolves in a leisurely fashion under the Schrodinger equation, and "measurements", in which [it] suddenly and discontinuously collapses." This is a realist position, so I don't favor it, because it does not make any testable predictions.
I think your flavor of instrumentalism is a respectable position in the foundational debate, but to describe it as the standard position is incorrect.
OK, I will stop calling it standard, just instrumental.
I think there was a time when physicists in general had a more operationalist bent, but things have changed.
How?
↑ comment by [deleted] · 2012-06-10T22:23:32.134Z · LW(p) · GW(p)
Okay, the Orthodox QM is an informal specification of anticipated experimental results, and acknowledges decoherence as a thing. That is good to know.
My base claim is that decoherence can and will become macroscopic given time. Some physicists seem to disagree. Why? To the best of my understanding, it is obviously implied by the mathematics.
I am well aware the Born Rule is a mystery. Where the Born probabilities come from, I don't know. Mangled Worlds seems like it might have the structure of a good explanation - it smells right, even if it isn't.
(EY admitted as much, if you want an argument from authority)
Now that was uncalled for.
Replies from: shminux↑ comment by Shmi (shminux) · 2012-06-11T01:43:37.891Z · LW(p) · GW(p)
Okay, the Orthodox QM is an informal specification of anticipated experimental results, and acknowledges decoherence as a thing. That is good to know.
OK, as pragmatist pointed out, calling it orthodox is misleading. Sorry. From now on I'll be calling it instrumentalist. As for "informal", it's as formal as it gets, pure math.
My base claim is that decoherence can and will become macroscopic given time.
That's an experimental fact, you don't need to claim anything.
Some physicists seem to disagree.
Really? Who?
Why? To my best expertise it is obviously implied by the mathematics behind it.
Feel free to outline the math. The best sort-of-derivation so far, as far as I know, is given by Zurek and is known as einselection.
Replies from: None↑ comment by [deleted] · 2012-06-11T02:01:48.833Z · LW(p) · GW(p)
Perceptions of groups are often skewed. Mine was.
That update out of the way, why are we arguing? We do not disagree.
Replies from: shminux↑ comment by Shmi (shminux) · 2012-06-11T04:02:35.845Z · LW(p) · GW(p)
Aumann ftw!
↑ comment by A1987dM (army1987) · 2012-06-11T17:44:53.999Z · LW(p) · GW(p)
I wouldn't call it “orthodox”, but see this:
In addition to these formal axioms one needs a rudimentary interpretation relating the formal part to experiments. The following minimal interpretation seems to be universally accepted.
MI. Upon measuring at times t_l (l=1,...,n) a vector X of observables with commuting components, for a large collection of independent identical (particular) systems closed for times t<t_l, all in the same state rho_0 = lim_{t to t_l from below} rho(t) (one calls such systems identically prepared), the measurement results are statistically consistent with independent realizations of a random vector X with measure as defined in axiom A5.
Note that MI is no longer a formal statement since it neither defines what 'measuring' is, nor what 'measurement results' are and what 'statistically consistent' or 'independent identical system' means. Thus MI has no mathematical meaning - it is not an axiom, but already part of the interpretation of formal quantum mechanics.
[...]
The lack of precision in statement MI is on purpose, since it allows the statement to be agreeable to everyone in its vagueness; different philosophical schools can easily fill it with their own understanding of the terms in a way consistent with the remainder.
[...]
MI is what every interpretation I know of assumes (and has to assume) at least implicitly in order to make contact with experiments. Indeed, all interpretations I know of assume much more, but they differ a lot in what they assume beyond MI.
Everything beyond MI seems to be controversial. In particular, already what constitutes a measurement of X is controversial. (E.g., reading a pointer, different readers may get marginally different results. What is the true pointer reading?)
↑ comment by Micaiah_Chang · 2012-06-10T20:19:33.157Z · LW(p) · GW(p)
So what is the orthodox formulation of QM, which is perfectly compatible with decoherence and doesn't resemble the straw man? I'm sorry if you've posted this elsewhere, but I'd really like to know what you think.
Replies from: shminux↑ comment by Shmi (shminux) · 2012-06-10T21:13:28.280Z · LW(p) · GW(p)
See my reply to MagnetoHydroDynamics.
comment by jacob_cannell · 2012-06-09T08:31:12.534Z · LW(p) · GW(p)
Rolf, I'm curious about the actual computational models you use.
How much is or can be simulated? Do the simulations cover only the exact spatial-temporal slice of the impact, or the entire accelerator, or what? Does the simulation environment include some notion of the detector?
And on that note, the Copenhagen interpretation has always bothered me in that it doesn't seem computable. How can the collapse actually be handled in a general simulation?
Replies from: Dreaded_Anomaly, RolfAndreassen↑ comment by Dreaded_Anomaly · 2012-06-10T01:59:59.792Z · LW(p) · GW(p)
I am a graduate student in experimental particle physics, working on the CMS experiment at the LHC. Right now, my research work mainly involves simulations of the calorimeters (detectors which measure the energy deposited by particles as they traverse the material and create "showers" of secondary particles). The main simulation tool I use is software called GEANT, which stands for GEometry ANd Tracking. (Particle physicists have a special talent for tortured acronyms.) This is a Monte Carlo simulation, i.e. one that uses random numbers. The current version of the software is Geant4, which is how I will refer to it.
The simulation environment does have an explicit description of the detector. Geant4 has a geometry system which allows the user to define objects with specific material properties, size, and position in the overall simulated "world". A lot of work is done to ensure the accuracy of the detector setup (with respect to the actual, physical detector) in the main CMS simulation software. Right now, I am working on a simplified model with a less complicated geometry, necessary for testing upgrades to the calorimeters. The simplified geometry makes it easier to swap in new materials and designs.
Geant4 also has various physics lists which describe the scattering and interaction processes that particles undergo when they traverse a material. Different models are used for different energy ranges. The choice of physics list can make a significant difference in the results of the simulation. Like the geometry setup, the physics lists can be modified and tuned for better agreement with experimental data or to introduce new models. The user can specify how long the program should keep track of particles, as well as a minimum energy cutoff for secondary particles (generated in showers).
An often frustrating part of Geant4 simulations is that the computing time scales roughly linearly with the number of particles and the energy of the particles. One can mitigate this problem to some extent by running in parallel, e.g. submitting 10 jobs with 1000 events each, instead of one job with 10000 events. (Rolf talks about parallelization here.) However, as we keep getting more events with higher energies at the LHC, computing time becomes more of an issue.
Because of this, there is an ongoing effort in "fast simulation." To do a faster simulation than Geant4, we can come up with parameterizations that reproduce some essential characteristics of particle showers. Specifically, we parameterize the distribution of energy deposited in the material in both the longitudinal and transverse directions. (For example, the longitudinal distribution is often parameterized as a gamma distribution.) The development of these parameterizations can be complicated, but once we have an algorithm, the simulation just requires evaluating the functions at each step. Fast simulation essentially occurs above the particle level, which is what makes it faster. A caveat: this is much easier for electromagnetic showers (which involve only electrons and photons, and only a few main processes for high energies) than for hadronic showers (which involve numerous hadrons and processes, because the strong force plays a crucial role, and therefore the energy distributions fluctuate quite a bit).
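A sketch of what such a parameterization can look like in code - the shape and scale values below are invented for illustration, not fitted CMS numbers:

```python
import numpy as np
from scipy.special import gammainc

# Fast-simulation-style longitudinal shower profile: the energy deposited
# up to depth t (in radiation lengths) follows a gamma distribution, so the
# energy in each layer is a difference of the gamma CDF at the layer edges.

def deposited_energy(E0, layer_edges, a=4.0, b=0.5):
    """Energy deposited between consecutive depths, via the gamma CDF."""
    cdf = gammainc(a, b * np.asarray(layer_edges))  # regularized lower gamma
    return E0 * np.diff(cdf)

edges = np.linspace(0.0, 25.0, 26)      # 25 layers, one radiation length each
dE = deposited_energy(100.0, edges)     # a 100 GeV shower
for i, e in enumerate(dE[:8]):
    print(f"layer {i:2d}: {e:6.2f} GeV")
```

In a real fast simulation the parameters themselves would be functions of particle type, energy, and material, fitted against full Geant4 showers or test-beam data.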
What I have given here is an overview of the simulation study of detectors; in all of this, we send single particles through the detector material. We do the same thing in real life, with a "test beam", so that we can compare to data. The actual collisions at the LHC, however, produce events far more complex than a single particle test beam. We simulate those events, too (Rolf discusses some of that below), and there are even more complications involved. I am not as knowledgeable there (yet), and this post is long enough as it is, so I will hold off on elaborating. I hope this has given you some insight into modern particle simulations!
↑ comment by RolfAndreassen · 2012-06-09T22:00:54.657Z · LW(p) · GW(p)
So the reason we simulate things is, basically, to tell us things about the detector, for example its efficiency. If you observe 10 events of type X after 100k collisions, and you want to know the actual rate, you have to know your reconstruction efficiency with respect to that kind of event - if it's fifty percent (and that would be high in many cases) then you actually had 20 physical events (plus or minus 6, obviously), and that's the number you use in calculating whatever parameter you're trying to measure.

So you write Monte Carlo simulations, saying "Ok, the D* goes to D0 and pi+ with 67.4% probability, then the D0 goes to Kspipi with 5% probability and such-and-such an angular distribution, then the Ks goes to pions pretty exclusively with this lifetime, then the pions are long-lived enough that they hit the detector, and it has such-and-such a response in this area." In effect we don't really deal with quantum mechanics at all; we don't do anything with the collapse. (Talking here about experiments - there are theorists who do, for example, grid calculations of strong-force interactions and try to predict the value of the proton mass from first principles.) Quantum mechanics only comes in to inform our choice of angular distributions. (Edit: Let me rephrase that. We don't really simulate the collapse; we say instead, "Ok, there's an X% chance of this, so roll a pseudorandom number between zero and one; if less than X, that's the outcome we're going with." We don't deal with the transition, as it were, from wave functions to particles.)

The actual work is in 'swimming' the long-lived decay products through our simulation of the detector. The idea is to produce information in the same format as your real data, for example "voltage spike in channel 627 at timestamp 18", and then run the same reconstruction software on it as on real data. The difference is that you know exactly what was produced, so you can go back and look at the generated distributions and see if, for example, your efficiency drops in particular regions of phase space. Usually it does, for example if one particle is slow, or especially of course if it flies down the beampipe and doesn't hit the active parts of the detector.
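A toy version of that efficiency correction and of the accept/reject rolls - the numbers are the illustrative ones from above, and the code is only a sketch, not actual analysis software:

```python
import numpy as np

rng = np.random.default_rng(0)

# Correct an observed count back to a physical yield, scaling the
# Poisson error by the same efficiency factor.
def corrected_yield(n_observed, efficiency):
    return n_observed / efficiency, np.sqrt(n_observed) / efficiency

print(corrected_yield(10, 0.5))   # -> (20.0, ~6.3): "20 plus or minus 6"

# The accept/reject step in the Monte Carlo: roll a uniform pseudorandom
# number and take the branch if the roll falls below its probability.
branching_fraction = 0.674        # e.g. D* -> D0 pi+, as in the comment
decays = rng.random(100_000) < branching_fraction
print(decays.mean())              # ~0.674
```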
Calibrating these simulations is a fairly major task that consumes a lot of physicist time and attention. We look at known events; at BaBar, for example, we would occasionally shut off the accelerator and let the detector run, and use the resulting cosmic-ray data for calibration. It helps that there are really only five particles that are long-lived enough to reach the detector, namely pion, kaon, neutron, electron, and proton; so we can study how these particles interact with matter and use that information in the simulations.
Another reason for simulating is to do blind studies. For example, suppose you want to measure the rate at which particle X decays to A+B+C. You need some selection criteria to throw away the background. The higher your signal-to-noise ratio, the more accurately you can measure the rate, within some limits - there's a tradeoff in that the more events you have, the better the measurement. So you want to find the sweet spot between 0% of the data at 100% purity and 100% of the data at 2% purity. (Purity, incidentally, is usually defined as signal/(signal+background).) But you usually don't want to study the effects of your selections directly on data, because there's a risk of biasing yourself - for example, in the direction of agreement with a previous measurement of the same quantity. (Millikan's oil drops are the classic example, although simulations weren't involved.) So you tune your cuts on Monte Carlo events, and then when you're happy with them you go see if there's any actual signal in the data. This sort of thing is one reason physicists are reasonably good about publishing negative results, as in "Search for X"; it could be very embarrassing to work three years on a channel and then be unable to publish because there's no signal in the data. In such a case the conclusion is "If there had been a signal of such-and-such a level, we would have seen it (with 95% probability); we didn't; so we conclude that the process, if it occurs, has a rate lower than X".
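A sketch of that cut-tuning step on simulated events only - the Gaussian signal, exponential background, and yields are invented, and S/sqrt(S+B) is just one common figure of merit:

```python
import numpy as np

rng = np.random.default_rng(1)
sig = np.abs(rng.normal(0.0, 1.0, 2_000))   # signal discriminant (MC), near zero
bkg = rng.exponential(2.0, 100_000)          # background discriminant (MC), broad

# Scan the cut and maximize S/sqrt(S+B); tighter cuts give higher purity,
# looser cuts give more events -- the "sweet spot" lies in between.
cuts = np.linspace(0.1, 5.0, 50)
fom = [(sig < c).sum() / np.sqrt((sig < c).sum() + (bkg < c).sum())
       for c in cuts]
print(f"best cut: x < {cuts[int(np.argmax(fom))]:.2f}")
```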
comment by James_Miller · 2012-06-09T06:25:46.016Z · LW(p) · GW(p)
Might life in our universe continue forever? Do proton decay and the laws of thermodynamics, if nothing else, doom us?
Replies from: RolfAndreassen, trade_apprentice↑ comment by RolfAndreassen · 2012-06-09T19:34:46.445Z · LW(p) · GW(p)
Proton decay has not been observed, but even if it happens, it needn't be an obstacle to life as such. For humans in anything remotely like our present form you need protons, but not for life in general. Entropy, however, is a problem. All life depends on having an energy gradient of some form or other; in our case, basically the difference between the temperature of the Sun and that of interstellar space. Now, second thermo can be stated as "All energy gradients decrease over a sufficiently long time"; so eventually, for any given form of life, the gradient it works off is no longer sharp enough to support it.

However, what you can do is constantly redesign life so that it will be able to live off the gradients that will exist in the next epoch. You would be trying to run the amount and speed of life down an asymptotic curve that was nevertheless just slightly faster than the curve towards total entropy. At every epoch you would be shedding life and complexity; your civilisation (or ecology) would be growing constantly smaller, which is of course a rather alien thing for twenty-first-century Westerners to consider. However, the idea is that by growing constantly smaller you never hit the wall where the gradient just cannot support your current complexity anymore, and instantly collapse to zero.

An asymptote that never hits zero is, presumably, better than a curve of any shape that hits the wall and crashes - at least this is true if your goal is longevity; of course, pure survival is not the only goal of humans, so there's a value judgement to be made there. You might decide that it's better not to throw anyone out of the lifeboat and all starve together, rather than keep going at the price of endless sacrifice and endless shrinking. And, of course, if we can extrapolate to such incredibly distant beings at all, there are going to be quarrels over exactly who gets thrown out, and the resulting conflict might well make the asymptote shrink drastically, or collapse, as resources are used to fight instead of survive. To survive literally forever you need to be lucky every time; entropy only needs to be lucky once.
That said, even with total entropy you get the occasional quantum fluctuation that creates a small, local gradient again - in fact, arbitrarily large gradients if you wait arbitrarily long times; if somehow you were able to survive the period between such events, you could indeed live forever. In fact, if you are able to wait long enough you will see a quantum fluctuation the size of the Big Bang. The problem is, of course, that a human, and probably life more generally as well, is extremely low-entropy compared to the sort of universe you get at 10^1000 years. In fact, interstellar space from our era would look rather low-entropy compared to that stuff. So the difficulty is to protect yourself against the, as it were, sucking vacuum that tries to rip the low entropy out of your body, without using up your reserves of energy on self-repair.
Overall, I'd say it doesn't look utterly hopeless, although it is subject to a Fermi paradox: If survival over arbitrary timescales is possible, why don't we see any survivors from previous BB-level events? If my account is correct, it seems unlikely that ours is the first such fluctuation.
Replies from: DanielLC↑ comment by DanielLC · 2012-06-10T22:20:53.211Z · LW(p) · GW(p)
You would be trying to run the amount and speed of life down on an asymptotic curve that was nevertheless just slightly faster than the curve towards total entropy.
Is the total subjective time finite or infinite?
That said, even with total entropy you get the occasional quantum fluctuation that creates a small, local gradient again - in fact, arbitrarily large gradients if you wait arbitrarily long times;
Does the expansion of space pose a problem? If you had a universe of a constant size, you'd expect fluctuations in entropy to create arbitrarily large gradients in energy if you wait long enough, but if it keeps spreading out, the probability of a gradient of a given size ever happening would be less than one, wouldn't it?
Also, wouldn't we all be Boltzmann brains if it worked like that?
Replies from: RolfAndreassen↑ comment by RolfAndreassen · 2012-06-11T19:43:11.321Z · LW(p) · GW(p)
Is the total subjective time finite or infinite?
The intention was to make it infinite, otherwise there's no use to the process. You'll notice that the laws of thermodynamics don't say anything about the shape of the downward trend, so it is at least conceivable that it allows a non-convergent series.
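One toy scaling that makes the point, in the spirit of Dyson's 'eternal intelligence' argument (an invented illustration, not something from the comment above): let epoch n burn energy E_n = E_0 2^(-n) at an operating temperature theta_n = theta_0 2^(-n), with subjective time per epoch proportional to E_n/theta_n. Then the total energy budget converges while the total subjective time diverges:

```latex
\sum_{n=0}^{\infty} E_n = 2E_0 < \infty,
\qquad
\sum_{n=0}^{\infty} \frac{E_n}{\theta_n}
  = \sum_{n=0}^{\infty} \frac{E_0\,2^{-n}}{\theta_0\,2^{-n}} = \infty .
```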
If you had a universe of a constant size, you'd expect fluctuations in entropy to create arbitrarily large gradients in energy if you wait long enough, but if it keeps spreading out, the probability of a gradient of a given size ever happening would be less than one, wouldn't it?
This doesn't look obvious to me. You get more vacuum to play with; the probability per unit volume should remain constant.
Also, wouldn't we all be Boltzmann brains if it worked like that?
Could be. Do you know we aren't? :)
Replies from: DanielLC↑ comment by DanielLC · 2012-06-11T19:54:52.519Z · LW(p) · GW(p)
This doesn't look obvious to me. You get more vacuum to play with; the probability per unit volume should remain constant.
I was assuming that there has to be stuff in space for stuff to happen. I guess I was wrong.
Do you know we aren't? :)
There's a chance that our experiences are just random, which we can't do much to reduce. All we can do is look at the probability of physics working a certain way given that we are not random. That cosmology would be ridiculously unlikely given that we are not random, because under it, not being a Boltzmann brain is extraordinarily unlikely.
↑ comment by trade_apprentice · 2024-03-19T16:00:59.911Z · LW(p) · GW(p)
Not an answer, but there is a beautiful short sci-fi story by Isaac Asimov that touches on this theme, called "The Last Question". I don't know if it is okay to provide a link, but it isn't hard to find online.
comment by pleeppleep · 2012-06-09T22:35:27.724Z · LW(p) · GW(p)
When and why did you first start studying physics? Did you just encounter it in school, or did you first try to study it independently? Also, what made you decide to focus on your current area of expertise?
Replies from: RolfAndreassen↑ comment by RolfAndreassen · 2012-06-10T04:09:52.102Z · LW(p) · GW(p)
I took a physics course in my International Baccalaureate program in high school - if you're not familiar with IB, it's sort of the European version of AP - and it really resonated with me. There's just a lot of cool stuff in physics; we did things like building electric motors using these ancient military-surplus magnets that had once been installed in radars for coastal fortresses. Then when I went on to college, I took some math courses and some physics courses, and found I liked the physics better. In the summer of 2003 (I think) I went to CERN as a summer student, and had an absolute blast even though the actual work I was doing wasn't so very advanced. (I wrote a C interface to an ancient Fortran simulation program that had been kicking around since it was literally on punchcards. Of course the scientist who assigned me the task could have done it himself in a week, while it took me all summer, but that saved him a week and taught me some real coding, so it was a good deal for both of us.)

So I sort of followed the path of least resistance from that point. I ended up doing my Master's degree on BaBar data. Then for my PhD I wanted to do it outside Norway, so it was basically a question of connections: My advisor knew someone who was looking for a grad student, wrote me a recommendation, and I moved to the US and started my PhD. Then, when it was time to choose a thesis topic, I actually, at first, chose something completely different, involving neutrinos and reconstructing a particular decay chain from missing energy and some constraints. It turned out we couldn't get a meaningful measurement with the data we had, there were too many random events that would fake the signal. So I switched to charm mixing, which (with perhaps the teensiest touch of hindsight bias) I now actually find more interesting anyway.
As you can see, 'decide' may be a somewhat strong word in this context; I've basically worked on what my advisors have suggested, and found it interesting enough not to quit. I suspect I could have worked on practically any problem with much the same results.
Replies from: shminux↑ comment by Shmi (shminux) · 2012-06-10T04:15:39.068Z · LW(p) · GW(p)
As you can see, 'decide' may be a somewhat strong word in this context; I've basically worked on what my advisors have suggested, and found it interesting enough not to quit. I suspect I could have worked on practically any problem with much the same results.
Yep, sunk cost is not always a fallacy.
Replies from: Vanivercomment by [deleted] · 2012-06-10T04:59:12.399Z · LW(p) · GW(p)
What will happen if we don't find supersymmetry at the LHC? What will happen if we DO find it?
Replies from: RolfAndreassen↑ comment by RolfAndreassen · 2012-06-10T16:16:36.903Z · LW(p) · GW(p)
Well, if we do find it there are presumably Nobel prizes to be handed out to whoever developed the correct variant. If we don't, I most earnestly hope we find something else, so someone else gets to go to Stockholm. In either case I expect the grant money will keep flowing; there are always precision measurements to be made. Or were you asking about practical applications? I can't say I see any, but then they always do seem to come as a surprise.
Replies from: army1987↑ comment by A1987dM (army1987) · 2012-06-13T10:34:32.928Z · LW(p) · GW(p)
In either case I expect the grant money will keep flowing; there are always precision measurements to be made.
I somehow fear that if the LHC finds the Higgs boson but no beyond-the-Standard-Model physics, it'll become absurdly hard to get decent funding for anything in particle physics.
Replies from: RolfAndreassen↑ comment by RolfAndreassen · 2012-06-13T18:00:41.854Z · LW(p) · GW(p)
For large-scale projects like the LHC that may be true, but that's not the only way to do particle physics. You can accomplish a lot with low energies, high luminosities, and a few hundred million dollars - pocket change, really, on the scale of modern governments.
That said, it is quite possible that redirecting funding for particle physics into other kinds of science is the best investment at this point even taking pure knowledge as valuable for its own sake. There's such a thing as an opportunity cost and a discount rate; the physics will still be out there in 50 years when a super-LHC can be built for a much smaller fraction of the world's economic resources. If you have no good reason to believe that there's an extinction-risk-reducing or Good-Singularity-Causing breakthrough somewhere in particle physics, you shouldn't allow sentiment for the poor researchers who will, sob, have to take filthy jobs in some inferior field like, I don't know, astronomy, or perhaps even have to go into industry (shudder), to override your sense of where the low-hanging fruits are.
Replies from: army1987↑ comment by A1987dM (army1987) · 2012-06-13T18:30:11.067Z · LW(p) · GW(p)
you shouldn't allow sentiment for the poor researchers
The problem is that I've been planning to be such a researcher myself! (I'm in the final year of my MSc and probably I'm going to apply for a PhD afterwards. I'm specializing in cosmic rays rather than accelerators, though.)
Replies from: RolfAndreassen↑ comment by RolfAndreassen · 2012-06-13T21:33:45.294Z · LW(p) · GW(p)
Well, I am such a researcher, and so what I say to you applies just as much to myself: Sucks to be you. The privilege of working on what interests us in a low-pressure academic environment is not a god-given right; it depends on convincing those who pay for it - ultimately, the whole of the public - that we are a good investment. In the end we cannot make any honest argument for that except "Do you want to know how the universe ticks, or not?" Well, maybe they don't. Or maybe their understanding-the-universe dollars could, right now, be spent in better places. If so, sucks to be us. We'll have to go earn six-figure wages selling algebra to financiers. Woe, woe, woe is us.
comment by Andy_McKenzie · 2012-06-09T00:31:32.266Z · LW(p) · GW(p)
Henry Markram says that it's inevitable that neuroscience will become a simulation science: http://www.nature.com/news/computer-modelling-brain-in-a-box-1.10066. Based on your experience in simulating and reconstructing events in particle physics, as well as your knowledge of the field, what do you think will be the biggest challenges the field of neuroscience faces as it transforms into this type of field?
Replies from: RolfAndreassen↑ comment by RolfAndreassen · 2012-06-09T01:14:13.723Z · LW(p) · GW(p)
I think their problems will be rather different from ours. We simulate particle collisions literally at the level of electrons (well, with some parametrisations for the interactions of decay products with detector material); I think it will be a while before we have the computer power to treat cells as anything but black boxes, and of course cells are huge on the scale of particle physics (as are atoms).

That said, I suspect that the major issues will be in parallelising their simulation algorithms (for speed) and storing the output (so you don't have to run it again). Consider that at BaBar we used to think that ten times as much simulated data as real data was a good ratio, and 2 times was an informal minimum. But at BaBar we had an average of eleven tracks per event. At LHCb the average multiplicity is on the order of thousands, and it's become impossible to generate even as much simulated as real data, at least in every channel. You run out of both simulation resources and storage space.

If you're simulating a whole brain, you've got way more objects, even taking atoms as the level of simulation. So you want speed so your grad students aren't sitting about for a week waiting for the current simulation to finish so they can tweak one parameter based on the result; and you get speed from parallelising and caching. "A week" is not hyperbole, by the way; for my thesis I parallelised fits because, with twenty CPUs crunching the same data, I could get a result overnight; at that rate I did graduate eventually. Running on one CPU, each fit would take two weeks or so, and I'd still be 'working' on it (that is, mainly reading webcomics), except of course that the funding would have run out some time ago.
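A sketch of why such fits parallelise so well - the negative log-likelihood is a sum over events, so each CPU can sum its own chunk of the data and the master just adds the partial sums. The Gaussian model and the generated data are stand-ins, not the actual thesis fit:

```python
import numpy as np
from multiprocessing import Pool

# Toy dataset and a Gaussian model; a real fit would use a physics PDF.
data = np.random.default_rng(2).normal(1.2, 0.8, 1_000_000)
chunks = np.array_split(data, 20)               # one chunk per CPU

def partial_nll(args):
    chunk, mu, sigma = args
    z = (chunk - mu) / sigma
    return np.sum(0.5 * z**2 + np.log(sigma))   # Gaussian NLL, up to a constant

def nll(mu, sigma, pool):
    return sum(pool.map(partial_nll, [(c, mu, sigma) for c in chunks]))

if __name__ == "__main__":
    with Pool(20) as pool:
        print(nll(1.2, 0.8, pool))  # a minimiser would call this repeatedly
```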
comment by Peter_de_Blanc · 2012-06-17T08:18:59.424Z · LW(p) · GW(p)
What happens when an antineutron interacts with a proton?
Replies from: Dreaded_Anomaly, RolfAndreassen, wedrifid↑ comment by Dreaded_Anomaly · 2012-06-17T11:17:25.863Z · LW(p) · GW(p)
There are various possibilities depending on the energy of the particles.
An antineutron has valence quarks u̅, d̅, d̅. A proton has valence quarks u, u, d. There are two quark-antiquark pairs here: u + u̅ and d + d̅. In the simplest case, these annihilate electromagnetically: each pair produces two photons. The leftover u + d̅ becomes a positively-charged pion.
The pi+ will most often decay to an antimuon + muon neutrino, and the antimuon will most often decay to a positron + electron neutrino + muon antineutrino. (It should be noted that muons have a relatively long lifetime, so the antimuon is likely to travel a long distance before decaying, depending on its energy. The pi+ decays much more quickly.)
There are many other paths the interaction can take, though. The quark-antiquark pairs can interact through the strong force, producing more hadrons. They can also interact through the weak force, producing other hadrons or leptons. And, of course, there are different alternative decay paths for the annihilation products that will occur in some fraction of events. As the energy of the initial particles increases, more final states become available. Energy can be converted to mass, so more energy means heavier products are allowed.
Edit: thanks to wedrifid for the reminder of LaTeX image embedding.
Replies from: wedrifid↑ comment by wedrifid · 2012-06-17T13:57:32.071Z · LW(p) · GW(p)
An antineutron has valence quarks u¯, d¯, d¯. (The bar should really be directly above the letter to indicate antiparticles, but Markdown does not have an overline syntax as far as I know.)
Piece of cake:
![](http://www.codecogs.com/png.latex?\\bar\{u\},%20\\bar\{d\},%20\\bar\{d\})
Replies from: kpreid↑ comment by kpreid · 2012-06-17T23:10:11.436Z · LW(p) · GW(p)
Another approach is to use actual combining overlines U+0305: u̅, d̅, d̅. This requires no markup or external server support; however, these Unicode characters are not universally supported and some readers may see a letter followed by an overline or a no-symbol-available mark.
If you wish to type this and other Unicode symbols on a Mac, you may be interested in my mathematical keyboard layout.
↑ comment by RolfAndreassen · 2012-06-17T16:16:39.818Z · LW(p) · GW(p)
Very complicated things.
Both the antineutron and the proton are soups of gluons and virtual quarks of all kinds surrounding the three valence quarks Dreaded_Anomaly mentions; all of which interact by the strong force. The result is exceedingly intractable. Almost anything that doesn't actually violate a conservation law can come out of this collision. The most common case, nonetheless, is pions - lots of pions.
This is also the most common outcome from neutron-proton and neutron-antiproton collisions; the underlying quark interactions aren't all that different.
↑ comment by wedrifid · 2012-06-17T09:19:48.539Z · LW(p) · GW(p)
What happens when an antineutron interacts with a proton?
Good question.
I'm going to tender the guess that you get a kaboom (energy release equivalent to the mass of two protons) and a left over positron and neutrino spat out kind of fast.
comment by Stuart_Armstrong · 2012-06-10T09:20:17.691Z · LW(p) · GW(p)
May be slightly out of your area, but: do you believe the entropy-as-ignorance model is the correct way of understanding entropy?
Replies from: RolfAndreassen↑ comment by RolfAndreassen · 2012-06-10T16:14:04.115Z · LW(p) · GW(p)
Well no, it seems to me that there is a real physical process apart from our understanding of it. It's true that if you had enough information about a random piece of near-vacuum you could extract energy from it, but where does that information come from? You sort of have to inject it into the problem by a wave of the hand. So, to put it differently, if entropy is ignorance, then the laws of thermodynamics should be reformulated as "Ignorance in a closed system always increases". It doesn't really help, if you see what I mean.
Replies from: DanielLC, Manfred↑ comment by DanielLC · 2012-06-17T04:09:39.640Z · LW(p) · GW(p)
What I've heard seemed to indicate that, if you assigned a certain entropy density function to classical configuration space, and integrated it over a certain area to get entropy at the initial time, then let the area evolve, and integrated over that area to get the entropy at the final time, the entropy would stay constant.
This would mean that conservation of entropy is the actual physical process. Increase in entropy is just us increasing the size at the final time because we're not paying close enough attention to exactly where it should be.
Also, the more you know about the system, the smaller the area you could give in configuration space to specify it, and thus the lower the entropy.
Is this accurate at all?
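For reference, the theorem DanielLC seems to be gesturing at is Liouville's, which lives in phase space rather than configuration space; a sketch in LaTeX notation:

    \frac{d\rho}{dt} = \frac{\partial\rho}{\partial t} + \{\rho, H\} = 0
    \quad\Longrightarrow\quad
    S = -k_B \int \rho \ln \rho \; d^{3N}q \, d^{3N}p = \text{const.}

The density ρ is constant along Hamiltonian trajectories, so the fine-grained Gibbs entropy never changes; the apparent increase comes from coarse-graining, i.e. smoothing the finely filamented ρ into something our bookkeeping can track, which can only raise -∫ρ ln ρ.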
↑ comment by Manfred · 2012-06-10T19:39:19.812Z · LW(p) · GW(p)
It's not really any more "unhelpful" than the statement that the number of bits of information needed to pick out a specific state of a system always increases. And that one's just straight Shannon entropy.
Replies from: RolfAndreassen↑ comment by RolfAndreassen · 2012-06-10T19:41:53.871Z · LW(p) · GW(p)
Sure; the point is that we have lots of equivalent formulations of entropy and I don't see the need to pick out one of them as the correct way of understanding it. One or another may be more intuitively appealing to particular students, or better suited to particular problems, but they're all maps and not territories.
Replies from: Manfred↑ comment by Manfred · 2012-06-10T19:52:24.275Z · LW(p) · GW(p)
Given a quantum state, you can always tell me the entropy of that specific quantum state. It's 0. If that's the territory, then where is entropy in the territory?
Replies from: army1987, RolfAndreassen, wnoise↑ comment by A1987dM (army1987) · 2012-06-10T20:21:32.911Z · LW(p) · GW(p)
There's something subtle about what's map and what's territory in density matrices. I'd like to think of the territory as a pure quantum state and of maps as mixed states, but... If John thinks the electron in the centre of this room is either spin-up or spin-down but he has no idea which (i.e. he assigns probability 50% to each), and Jane thinks the electron in the centre of this room is either spin-east or spin-west but she has no idea which, then for any possible experiment whatsoever, the two of them would assign the same probability distribution to the outcome. There's something that puzzles me about this, but I'm not sure what that is.
↑ comment by RolfAndreassen · 2012-06-10T21:20:02.560Z · LW(p) · GW(p)
How much work can I extract from a system in that state? It's often useful to keep the theoretical eyes on the thermodynamical ball.
Replies from: Manfred↑ comment by Manfred · 2012-06-10T23:11:37.924Z · LW(p) · GW(p)
Helmholtz free energy (A, or sometimes F) = E - TS in the thermodynamic limit, right? So A = E in the case of a known quantum state.
Replies from: RolfAndreassen, wnoise↑ comment by RolfAndreassen · 2012-06-11T01:06:54.424Z · LW(p) · GW(p)
So statistical mechanics was my weakest subject, and we're well beyond my expertise. But if you're really saying that we cannot extract any work from a system if we know its quantum state, that is highly counterintuitive to me, and suggests a missed assumption somewhere.
Replies from: Manfred↑ comment by Manfred · 2012-06-11T02:06:06.275Z · LW(p) · GW(p)
Helmholtz free energy (A) is basically the work you can extract (or more precisely, the free energy change between two states is the work you can extract by moving between those two states). So if A = E, where E is the energy that satisfies the Schroedinger equation, that means you can extract all the energy.
Sort of like Maxwell's demon.
Replies from: RolfAndreassen↑ comment by RolfAndreassen · 2012-06-11T02:10:32.842Z · LW(p) · GW(p)
Excuse me, the thought somehow rotated 180 degrees between brain and fingers. My point from a couple of exchanges up remains: How did you come to know this quantum state? If you magically inject information into the problem you can do anything you like.
Replies from: Incorrect, Manfred↑ comment by Incorrect · 2012-06-17T04:53:01.726Z · LW(p) · GW(p)
How did you come to know this quantum state?
We guessed and got really lucky?
Replies from: RolfAndreassen↑ comment by RolfAndreassen · 2012-06-17T16:05:05.639Z · LW(p) · GW(p)
In other words, magic. As I said, if you're allowed to use magic you can reduce the entropy as much as you like.
Replies from: Incorrect↑ comment by Incorrect · 2012-06-17T16:55:44.825Z · LW(p) · GW(p)
So is it impossible to guess and be lucky? Usually in this context the word "magic" would imply impossibility.
Replies from: RolfAndreassen↑ comment by RolfAndreassen · 2012-06-17T17:04:29.264Z · LW(p) · GW(p)
Well no, it's not impossible, but the chance of it happening is obviously 2^-N, where N is the number of bits required to specify the state. It follows that if you have 2^N states, you will get lucky and extract useful work once; which is, of course, the same amount of useful work you would get from 2^N states anyway, whether you'd made a guess or not. Even on the ignorance model of entropy, you cannot extract anything useful from randomness!
↑ comment by Manfred · 2012-06-11T03:48:02.334Z · LW(p) · GW(p)
How did you come to know this quantum state?
Measurements work well if you want to know what quantum state something is in. Or alternately, you could prepare the state from scratch - we can do it with quite a few atoms now.
And I hardly think doing a measurement with low degeneracy lets you do anything. You can't violate conservation of energy, or conservation of momentum, or conservation of angular momentum, or CPT symmetry. It's only thermodynamics that stops necessarily applying.
Replies from: RolfAndreassen↑ comment by RolfAndreassen · 2012-06-11T19:48:15.054Z · LW(p) · GW(p)
Measurements work well if you want to know what quantum state something is in. Or alternately, you could prepare the state from scratch - we can do it with quite a few atoms now.
Yes, ok, but what about the state of the people doing the measurements or the preparation? You can't have perfect information about them as well, that's second thermo for you. You could just as well skip the step that mentions information and say that "If we had a state of zero entropy we could make it do a lot of work". So you could, and the statement "If we had a state that we knew everything about we could make it do a lot of work" is equivalent, but I don't see where one is more fundamental, useful, intuitive, or correct than the other. The magic insertion of information is no more helpful than a magic reduction of entropy.
↑ comment by wnoise · 2012-06-17T07:24:03.441Z · LW(p) · GW(p)
Wouldn't Gibbs free energy be more appropriate? pV should be available for work too.
I find myself slightly confused by that definition. Energy in straight quantum mechanics (or classical Newtonian mechanics) is a torsor. There is no preferred origin, and adding any constant to all the states changes the evolution not at all. It therefore must not change the extractable work. So the free energies are clearly incorrectly defined, and must instead be defined relative to the ground state. In which case, yes, you could extract all the energy above that, if you knew the precise state, and could manipulate the system finely enough.
↑ comment by Manfred · 2012-06-17T09:30:00.796Z · LW(p) · GW(p)
1) Meh.
2) Right. I clarified this two posts down: "the free energy change between two states is the work you can extract by moving between those two states." So just like for energy, the zero point of free energy can be shifted around with no (classical) consequences, and what really matters (like what comes out of engines and stuff) is the relative free energy.
↑ comment by wnoise · 2012-06-17T07:03:05.161Z · LW(p) · GW(p)
Given a quantum state, you can always tell me the entropy of that specific quantum state. It's 0.
Only for pure states. Any system you have will be mixed.
Replies from: Manfred↑ comment by Manfred · 2012-06-17T09:19:47.024Z · LW(p) · GW(p)
I believe you mean "you will have incomplete information about any system you could really have."
Replies from: wnoise
comment by alex_zag_al · 2012-06-09T16:41:42.889Z · LW(p) · GW(p)
Of the knowledge of physics that you use, what of it would you know how to reconstruct or reprove or whatever? And what do you not know how to establish?
Replies from: RolfAndreassen↑ comment by RolfAndreassen · 2012-06-09T21:32:56.111Z · LW(p) · GW(p)
It depends on why I want to re-prove it. If I'm transported in a time machine back to, say, 1905, and want to demonstrate the existence of the atomic nucleus, then sure, I know how to run Rutherford's experiment, and I think I could derive enough basic scattering theory to demonstrate that the result isn't compatible with the mass being spread out through the whole atom. Even if I forgot that the nucleus exists, but remembered that the question of the mass distribution internal to an atom is an interesting one, the same applies. But to re-derive that the question is interesting, that would be tough. I think similar comments apply to most of the Standard Model: I am more or less aware of the basic experiments that demonstrated the existence of the quarks and whatnot, although in some cases the engineering would be a much bigger challenge than Rutherford's tabletop setup. Getting the math would be much harder; I don't think I have enough mathematical intuition to rederive quantum field theory. In fact I haven't thought about renormalisation since I forgot all about it after the exam, so absent gods forbid I should have to shake the infinities out. I think my role would be to describe and run the experiments, and let the theorists come up with the math.
comment by Andy_McKenzie · 2012-06-09T00:25:57.575Z · LW(p) · GW(p)
What do you see as the biggest practical technological application of particle physics (e.g., quarks and charms) that will come out in 4-10 years?
Replies from: RolfAndreassen, Luke_A_Somers↑ comment by RolfAndreassen · 2012-06-09T01:01:06.459Z · LW(p) · GW(p)
Unless you count spinoffs, I don't really see any. Big accelerator projects tend to be on the cutting edge of, for example, magnet technology - or even a bit beyond. The fused-silica photon-guide bars of the DIRC (Detector of Internally Reflected Cherenkov light) in the BaBar detector, for instance, were made to specifications a little beyond what the technology of the late nineties could actually manage; the company made a loss delivering them. Even now, we're talking about recycling the bars for the SuperB experiment rather than having new ones made. Similarly the magnets, and their cooling systems, of the LHC (both accelerator and detectors) are some of the most powerful on Earth. The huge datasets also tend to require new analysis methods, which is to say, algorithms and database handling; but here I have to caution that the methods in question might only be new to particle physicists, who after all aren't formally trained in programming and such. (Although perhaps we should be.)
So, to the extent that such engineering advances might make their way into other fields, take your choice. But as for the actual science, I think it is as close to knowledge for the sake of knowledge as you're going to get.
↑ comment by Luke_A_Somers · 2012-06-09T11:42:58.300Z · LW(p) · GW(p)
A few years ago, I heard about a very penetrating scanner for shipping containers, that used muons, which are second-generation particles, analogous to charm, but for leptons. I don't know whether it's still promising or not.
I don't know of any other applications for second- or third-generation particles. They all have so much shorter lifetimes than muons, it's hard to do anything with them.
Replies from: Luke_A_Somers↑ comment by Luke_A_Somers · 2012-07-03T17:10:04.368Z · LW(p) · GW(p)
The muon-based scanner is still alive - it was mentioned in a recent APS news. Apparently, it relies on cosmic ray muons only.
comment by magfrump · 2012-06-09T20:12:38.667Z · LW(p) · GW(p)
How often do you invoke spectral gap theorems to choose dimensionality for your data, if ever?
If you do this ever, would it be useful to have spectral gap theorems for eigenvalue differences beyond the first?
(I study arithmetic statistics and a close colleague of mine does spectral theory so the reason I ask is that this seems like an interesting result that people might actually use; I don't know if it is at all achievable or to what extent theorems really inform data collection though.)
Replies from: RolfAndreassen↑ comment by RolfAndreassen · 2012-06-09T22:44:44.977Z · LW(p) · GW(p)
How often do you invoke spectral gap theorems to choose dimensionality for your data, if ever?
I have never done so; in fact I'm not sure what it means. Could you expand a bit?
Replies from: magfrump↑ comment by magfrump · 2012-06-11T00:50:52.428Z · LW(p) · GW(p)
Given a graph, one can write down the adjacency matrix for the graph; its first eigenvalue must be positive; scale the matrix so that the first eigenvalue is one. Now there is a theorem, known as the spectral gap theorem (there are parallel theorems that I'm not totally familiar with) which says that the difference between the first and second eigenvalue must be at least some number (on the order of 5% if I recall; I don't have a good reference handy).
I went to a colloquium where someone was collecting data which could be made to essentially look like a graph; they would then test for the dimensionality of the data by looking at the eigenvalues of this matrix and seeing when the eigenvalues dropped off such that the variance was very low. However, depending on the distribution of eigenvalues the cutoff point may be arbitrary. At the time, she said that a spectral gap for later eigenvalues would be useful for making cutoff points less arbitrary (i.e. having a way to know if the next eigenvalue is definitively NOT a repeated eigenvalue because it's too far away).
This isn't exactly my specialty so I'm sorry if my explanation is a little rough.
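A guessed-at numerical illustration of that eigenvalue-dropoff heuristic, in Python - the data, the construction, and the expected result are all invented for the sake of the example:

    import numpy as np

    rng = np.random.default_rng(0)
    latent = rng.normal(size=(100, 3))      # 100 voters, 3 hidden traits
    votes = (latent @ rng.normal(size=(3, 40)) > 0).astype(float)  # 40 yes/no votes
    A = votes @ votes.T                     # voter-similarity matrix, graph-like
    eigvals = np.linalg.eigvalsh(A)[::-1]   # eigenvalues, descending
    eigvals /= eigvals[0]                   # scale so the top eigenvalue is 1
    print(eigvals[:8])  # hope to see roughly 3 values separated from the rest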
Replies from: RolfAndreassen↑ comment by RolfAndreassen · 2012-06-11T01:04:32.978Z · LW(p) · GW(p)
Ok, I've never used such an approach; I don't think I've ever worked with any data that could reasonably be made to look like a graph. (Unless perhaps it was raw detector hits before being reconstructed into tracks; and I've only brushed the edge of that sort of thing.) As for dimensionality, I would usually just count the variables. We are clearly talking about something very different from what I usually do.
Replies from: magfrump↑ comment by magfrump · 2012-06-12T02:57:08.512Z · LW(p) · GW(p)
The graph theory example was the only thing I thought of at the time but it's not really necessary; on recounting the tale to someone else in further detail I remembered that basically the person was just taking, say, votes as "yes"es and "no"s and tallying each vote as a separate dimension, then looking for what the proper dimension of the data was--so the number of variables isn't really bounded (perhaps it's 100) but the actual variance is explained by far fewer dimensions (in her example, 3).
So given a different perspective on what it is that fitting distributions means; does your work involve Lie groups, Weyl integration, and/or representation theory, and if so to what extent?
Replies from: RolfAndreassen↑ comment by RolfAndreassen · 2012-06-12T04:48:20.075Z · LW(p) · GW(p)
I don't understand how you get more than two dimensions out of data points that are either 0 or 1 (unless perhaps the votes were accompanied by data on age, sex, politics?) and anyway what I usually think of as 'dimension' is just the number of entries in each data point, which is fixed. It seems to me that this is perhaps a term of art which your friend is using in a specific way without explaining that it's jargon.
However, on further thought I think I can bridge the gap. If I understand your explanation correctly, your friend is looking for the minimum set of variables which explains the distribution. I think this has to mean that there is more data than yes-or-no; suppose there is also age and gender, and everyone above thirty votes yes and everyone below thirty votes no. Then you could have had dimensionality two, with some combination of age and gender required to predict the vote; but in fact age predicts it perfectly and you can just throw out gender, so the actual dimensionality is one.
So what we are looking for is the number of parameters in the model that explains the data, as opposed to the number of observables in the data. In physics, however, we generally have a fairly specific model in mind before gathering the data. Let me first give a trivial example: Suppose you have some data that you believe is generated by a Gaussian distribution with mean 0, but you don't know the sigma. Then you do the following: Assume some particular sigma, and for each event, calculate the probability of seeing that event. Multiply the probabilities. (In fact, for practical purposes we take the log-probability and add, avoiding some numerical issues on computers, but obviously this is isomorphic.) Now scan sigma and see which value maximises the probability of your observations; that's your estimate for sigma, with errors given by the values at which the log-probability drops by 0.5. (It's a bit involved to derive, but basically this corresponds to the frequentist 68%-confidence limits, assuming the log-probability function is symmetric around the maximum.)
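A minimal sketch of that scan in Python, with toy data standing in for real events (all numbers invented; the real thing would of course use the actual physics model):

    import numpy as np

    rng = np.random.default_rng(1)
    data = rng.normal(0.0, 2.0, 10_000)   # toy data: mean zero, true sigma = 2

    def log_likelihood(sigma):
        # Sum of log-probabilities of the events under a mean-zero Gaussian.
        return np.sum(-0.5 * (data / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi)))

    sigmas = np.linspace(1.5, 2.5, 1001)
    ll = np.array([log_likelihood(s) for s in sigmas])
    best = sigmas[np.argmax(ll)]
    inside = sigmas[ll >= ll.max() - 0.5]  # the Delta(ln L) = 0.5 band
    print(f"sigma = {best:.3f} +{inside.max() - best:.3f} -{best - inside.min():.3f}")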
Now, the LessWrong-trained eye can, presumably, immediately see the underlying Bayes-structure here. We are finding the set of parameters under which our data has maximal probability. In my toy example you can just scan the parameter space, point by point. For realistic models with, say, forty parameters - as was the case in my thesis - you have to be a bit more clever and use some sort of search algorithm that doesn't rely on brute force. (With forty parameters, even if you take only 10 points in each, you instantly have 10^40 points to evaluate - that is, at each point you calculate the probability for, say, half a million events with what may be quite a computationally expensive function. Not practical.)
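In practice the search is done with a numerical minimiser (in particle physics, traditionally MINUIT); a rough stand-in using scipy, with two parameters where the real fit had forty (everything here is a toy):

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(2)
    data = rng.normal(1.86, 0.01, 100_000)   # toy sample

    def nll(params):
        # Negative log-likelihood; fitting log(sigma) keeps sigma positive.
        mu, log_sigma = params
        sigma = np.exp(log_sigma)
        return np.sum(0.5 * ((data - mu) / sigma) ** 2 + np.log(sigma))

    result = minimize(nll, x0=[1.8, np.log(0.02)], method="Nelder-Mead")
    print(result.x[0], np.exp(result.x[1]))  # fitted mu and sigma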
The above is what I think of when I say "fitting a distribution". Now let me try to bring it back into contact with the finding-the-dimensions problem. The difference is that your friend is dealing with a set of variables such that some of them may directly account for others, as in my age/vote toy example. But in the models we fit to physics distributions, not all the parameters are necessarily directly observed in the event. An obvious example is the time resolution of the detector; this is not a property of the event (at least not solely of the event - some events are better measured than others) and anyway you can't really say that the resolution 'explains' the value of the time (and note that decay times are continuous, not multiple-choice as in most survey data.) Rather, the observed distribution of the time is generated by the true distribution convolved with the resolution - you have to do a convolution integral. If you measure a high (and therefore unlikely, since we're dealing with exponential decay) time, it may be that you really have an unusual event, or it may be that you have a common event with a bad resolution that happened to fluctuate up. The point, however, is that there's no single discrete-valued resolution variable that accounts for a discrete-valued time variable; it's all continuous distributions, derived quantities, and convolution integrals.
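For what it's worth, the exponential-decay-smeared-by-Gaussian-resolution convolution just described has a standard closed form (τ the lifetime, σ the resolution width):

    P(t) = \int_0^\infty \frac{1}{\tau} e^{-t'/\tau} \,
           \frac{1}{\sigma\sqrt{2\pi}} e^{-(t-t')^2/2\sigma^2} \, dt'
         = \frac{1}{2\tau} \exp\!\left(\frac{\sigma^2}{2\tau^2} - \frac{t}{\tau}\right)
           \mathrm{erfc}\!\left(\frac{\sigma}{\sqrt{2}\,\tau} - \frac{t}{\sqrt{2}\,\sigma}\right)

One can read off the point made above: a large measured t can come either from a genuinely late decay or from an ordinary one pushed up by the resolution, and the two possibilities are entangled inside the integral rather than separable event by event.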
So, we do not treat our data sets in the way you describe, looking for the true dimensionality. Instead we assume some physics model with a fixed number of parameters and seek the probability-maximising value of those parameters. Obviously this approach has its disadvantages compared to the more data-driven method you describe, but basically this is forced upon us by the shape of the problem. It is common to try several different models, and report the variance as a systematic error.
So, to get back to Lie groups, Weyl integration, and representation theory: None of the above. :)
Replies from: magfrump↑ comment by magfrump · 2012-06-12T15:36:27.936Z · LW(p) · GW(p)
I definitely agree that the type of analysis I originally had in mind is totally different than what you are describing.
Thinking about distributions without thinking about Lie groups makes my brain hurt, unless the distributions you're discussing have no symmetries or continuous properties at all--my guess is that they're there but for your purposes they're swept under the rug?
But yeah in essence the "fitting a distribution" I was thinking is far less constrained I think--you have no idea a priori what the distribution is, so you first attempt to isolate how many dimensions you need to explain it. In the case of votes, we might look at F_2^N, think about it as being embedded into the 0s and 1s of [0,1]^N, and try to find what sort of an embedded manifold would have a distribution that looks like that.
Whereas in your case you basically know what your manifold is and what your distribution is like, but you're looking for the specifics of the map--i.e. the size (and presumably "direction"?) of sigma.
I don't think "disadvantages" is the right word--these processes are essentially solving for totally unrelated unknowns.
Replies from: RolfAndreassen↑ comment by RolfAndreassen · 2012-06-12T17:05:59.045Z · LW(p) · GW(p)
Thinking about distributions without thinking about Lie groups makes my brain hurt, unless the distributions you're discussing have no symmetries or continuous properties at all--my guess is that they're there but for your purposes they're swept under the rug?
That is entirely possible; all I can tell you is that I've never used any such tool for looking at physics data. And I might add that thinking about how to apply Lie groups to these measurements makes my brain hurt. :)
Replies from: magfrump↑ comment by magfrump · 2012-06-13T06:10:07.210Z · LW(p) · GW(p)
tl;dr: I like talking about math.
Fair enough :)
I just mean... any distribution is really a topological object. If there are symmetries to your space, it's a group. So all distributions live on a Lie group naturally. I assume you do harmonic analysis at least--that process doesn't make any sense unless it lives on a Lie group! I think of distributions as essentially being functionals on a Lie group, and finding a fitting distribution is essentially integrating against its top-level differentials (if not technically at least morally.)
But if all your Lie groups are just vector spaces and the occasional torus (which they might very well be) then there might be no reason for you to even use the word Lie group because you don't need the theory at all.
Replies from: jsteinhardt, RolfAndreassen↑ comment by jsteinhardt · 2012-06-13T06:46:57.137Z · LW(p) · GW(p)
You can do harmonic analysis on any locally compact abelian group, see e.g. Pontryagin duality.
Replies from: magfrump↑ comment by magfrump · 2012-06-13T17:14:46.678Z · LW(p) · GW(p)
"locally compact" implies you have a topology--maybe I should be saying "topological group" rather than "Lie group," though.
Replies from: None↑ comment by [deleted] · 2012-06-13T18:19:12.610Z · LW(p) · GW(p)
All Lie groups already have a topology. They're manifolds, after all.
Replies from: magfrump↑ comment by magfrump · 2012-06-13T18:26:56.596Z · LW(p) · GW(p)
Yes. My original statement was that harmonic analysis is limited to Lie groups. jsteinhardt observed that any locally compact abelian group can have harmonic analysis done on it--some of these (say, p-adic groups) are not Lie groups, since they have no smooth structure, though they are still topological groups.
So I was trying to be less specific by changing my term from Lie group to topological group.
Replies from: None↑ comment by RolfAndreassen · 2012-06-13T18:38:58.496Z · LW(p) · GW(p)
any distribution is really a topological object.
I find this interesting, but I like to apply things to a specific example so I'm sure I understand it. Suppose I give you the following distribution of measurements of two variables (units are GeV, not that I suppose this matters):
1.80707 0.148763
1.87494 0.151895
1.86805 0.140318
1.85676 0.143774
1.85299 0.150823
1.87689 0.151625
1.87127 0.14012
1.89415 0.145116
1.87558 0.141176
1.86508 0.14773
1.89724 0.149112
What sort of topological object is this, or how do you go about treating it as one? Presumably you can think of these points in mD-deltaM space as being two-dimensional vectors. N-vectors are a group under addition, and if I understand the definition correctly they are also a Lie group. But I confess I don't understand how this is important; I'm never going to add together two events, the operation doesn't make any sense. If a group lives in a forest and never actually uses its operator, does it still associate, close, identify, and invert? (I further observe that although 2-vectors are a group, the second variable in this case can't go below 0.13957 for kinematic reasons; the subset of actual observations is not going to be closed or invertible.)
I'm not sure what harmonic analysis is; I might know it by another name, or do it all the time and not realise that's what it's called. Could you give an example?
Replies from: magfrump↑ comment by magfrump · 2012-06-14T00:43:02.959Z · LW(p) · GW(p)
My attempts at putting LaTeX notation here didn't work out, so I hope this is at all readable.
I would not call the data you gave me a distribution. I think of a distribution as being something like a Gaussian; some function f where, if I keep collecting data, and I take the average sum of powers of that data, it looks like the integral over some topological group of that function.
so: \lim_{n \to \infty} \sum_{k=1}^n g(x_k, y_k) = \int_{\mathbb{R}^2} f(x,y)\, g(x,y)\; dx \wedge dy, for any function g on R^2
usually rather than integrating over R^2, I would be integrating over SU(2) or some other matrix group; meaning the group structure isn't additive; usually I'd expect data to be like traces of matrices or something; for example on the appropriate subgroup of GL(2,R)+ these traces should never be below two; that sort of kinematic reason should translate into insight about what group you're integrating over.
When you say "fitting distributions" I assume you're looking for the appropriate f(x) (at least, after a fashion) in the above equality; minimizing a variable which should be the difference between the limits in some sense.
I may be a little out of my depth here, though.
Sorry I didn't mean harmonic analysis, I meant Fourier analysis. I am under the impression that this is everywhere in physics and electrical engineering?
Replies from: RolfAndreassen↑ comment by RolfAndreassen · 2012-06-14T19:34:22.437Z · LW(p) · GW(p)
I was a little sloppy in my language; strictly speaking 'distribution' does refer to a generating function, not to the generated data.
When you say "fitting distributions" I assume you're looking for the appropriate f(x) (at least, after a fashion) in the above equality; minimizing a variable which should be the difference between the limits in some sense.
Yes, exactly.
Sorry I didn't mean harmonic analysis, I meant Fourier analysis.
We certainly do partial waves, but not on absolutely everything. Take a detector resolution with unknown parameters; it can usually be well modelled by a simple Gaussian, and then there's no partial waves, there's just the two parameters and the exponential.
\lim_{n \to \infty} \sum_{k=1}^n g(x_k, y_k) = \int_{\mathbb{R}^2} f(x,y)\, g(x,y)\; dx \wedge dy, for any function g on R^2
Maybe something got lost in the notation? In the limit of n going to infinity the sum should likewise go to infinity, while the integral may converge. Also it's not clear to me what the function g is doing. I prefer to think in terms of probabilities: We seek some function f such that, in the limit of infinite data, the fraction of data falling within (x0, x0+epsilon) equals the integral on (x0, x0+epsilon) of f with respect to x, divided by the integral over all x. Generalise to multiple dimensions as required; taking the limit epsilon->0 is optional.
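In symbols, a sketch of that prose definition (with x_1, ..., x_N the observed events):

    \lim_{N \to \infty} \frac{\#\{\, i : x_i \in (x_0, x_0 + \epsilon) \,\}}{N}
    = \frac{\int_{x_0}^{x_0+\epsilon} f(x)\, dx}{\int f(x)\, dx}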
average sum of powers of that data,
I'm not sure what an average sum of powers is; where do you do this in the formula you gave? Is it encapsulated in the function g? Does it reduce to "just count the events" (as in the fraction-of-events goal above) in some limit?
Replies from: magfrump↑ comment by magfrump · 2012-06-14T20:07:47.107Z · LW(p) · GW(p)
Maybe something got lost in the notation?
Yes, there was supposed to be a 1/n in the sum, sorry!
Essentially what the g is doing is taking the place of the interval probabilities; for example, if I think of g as being the characteristic function on an interval (one on that interval and zero elsewhere) then the sum and integral should both be equal to the probability of a point landing in that interval. Then one can approximate all measurable functions by characteristic functions or somesuch to make the equivalence.
In practice (for me) in Fourier analysis you prove this for a basis, such as integer powers of cosine on a close interval, or simply integer powers on an open interval (these are the moments of a distribution).
I'm not sure what an average sum of powers is; where do you do this in the formula you gave? Is it encapsulated in the function g?
Yes; after you add in the 1/n hopefully the "average" part makes sense, and then just take g for a single variable to be x^k and vary over integers k. And as I mentioned above, yes I believe it does reduce to just "count the events;" just if you want to prove things you need to count using a countable basis of function space rather than looking at intervals.
Replies from: RolfAndreassen↑ comment by RolfAndreassen · 2012-06-14T20:26:27.330Z · LW(p) · GW(p)
It looks to me like we've bridged the gap between the approaches. We are doing the same thing, but the physics case is much more specific: We have a generating function in mind and just want to know its parameters, and we look only at the linear average, we don't vary the powers (*). So we don't use the tools you mentioned in the comment that started this thread, because they're adapted to the much more general case.
(*) Edit to add: Actually, on further thought, that's not entirely true. There are cases where we take moments of distributions and whatnot; a friend of mine who was a PhD student at the same time as me worked on such an analysis. It's just sufficiently rare (or maybe just rare in my experience!) that it didn't come to my mind right away.
Replies from: magfrump↑ comment by magfrump · 2012-06-14T22:01:02.417Z · LW(p) · GW(p)
Okay, so my hypothesis that basically all of the things that I care about are swept under the rug because you only care about what I would call trivial cases was essentially right.
And it definitely makes sense that if you've already restricted to a specific function and you just want parameters that you really don't need to deal with higher moments.
comment by Luke_A_Somers · 2012-06-09T11:33:15.670Z · LW(p) · GW(p)
Experimental condensed matter postdoc here. Specializing in graphene and carbon nanotubes, and to a lesser extent mechanical/electronic properties of DNA.
Replies from: Tripitaka, epigeios↑ comment by Tripitaka · 2012-06-09T14:28:40.763Z · LW(p) · GW(p)
Carbon nanotubes in space elevators: Nicola Pugno showed that the strength of macroscale CNTs is reduced to a theoretical limit of 30 GPa, against a needed strength of 62 GPa for some designs... What's the state of the art in tensile strength of macro-scale CNTs? Any other thoughts related to materials for space elevators?
Replies from: Luke_A_Somers, Luke_A_Somers↑ comment by Luke_A_Somers · 2012-11-27T16:03:24.016Z · LW(p) · GW(p)
I just read an article raising a point which is so obvious in retrospect that I'm shaking my head that it never occurred to me.
Boron Nitride nanotubes have a very similar strength to carbon nanotubes, but much much stronger interlayer coupling. They are a much better candidate for this task.
↑ comment by Luke_A_Somers · 2012-06-11T17:10:26.258Z · LW(p) · GW(p)
I'm not really up to speed on that, being more on the electronics end. Still, I've maintained interest. Personally, every year or so I check in with the NASA contest to see how they're doing.
http://www.nasa.gov/offices/oct/early_stage_innovation/centennial_challenges/tether/index.html
Last I heard, pure carbon nanotube yarn was a little stronger by weight than copper wire. Adding a little binder helps a lot.
Pugno's assumption of 100 nm long tubes is very odd - you can grow much longer tubes, even in fair quantity. Greater length helps a lot. The main mechanism of weakness is slippage, and having longer tubes provides more grip between neighboring tubes.
This is more in the realm of a nitpick, though. If I were to ballpark how much of a tensile strength discount we'd have to swallow on the way up from nanoscale, I would have guessed about 50%, which is not far off from his meticulously calculated 70%.
I'd love for space elevators to work; it's not looking promising. Not on Earth, at least. Mars provides an easier problem: lower mass and a reducing atmosphere ease the requirements on the cable. My main hope is, if we use a different design like a mobile rotating skyhook instead of a straight-up elevator, we could greatly reduce the required length, and also to some extent the strength. That compromise may be achievable.
↑ comment by epigeios · 2012-06-12T04:18:37.868Z · LW(p) · GW(p)
This might be out in left field, but:
Can water be pumped through carbon nanotubes? If so, has anyone tried? If they have, has anyone tried running an electric current through a water-filled nanotube? How about a magnetic current? How about light? How about sound?
Can carbon nanotubes be used as an antenna? If they can be filled with water, could they then be used more effectively as an antenna?
Replies from: Luke_A_Somers↑ comment by Luke_A_Somers · 2012-07-27T13:48:14.547Z · LW(p) · GW(p)
Sorry for the delayed response - I don't see a mechanism for reply notifications.
You can definitely cram water into carbon nanotubes, but they're hydrophobic, so it's not easy.
You can run an electric current through carbon nanotubes whether they've got water in them or not.
Spin transport is possible in perfect carbon nanotubes (magnetic current).
Carbon nanotubes are strong antennas, so they strongly interact with light. However, they are way way way too small to be waveguides for optical wavelengths, and EM radiation with an appropriate wavelength is way way way too penetrating. Water within them would just cause more scattering, not help carry current. Water carries ionic currents, which are orders of magnitude slower than electron or hole currents in nanotubes.
You can definitely carry sound with carbon nanotubes - google 'nanotube radio'.
Replies from: shokwave
comment by [deleted] · 2012-06-09T04:17:52.480Z · LW(p) · GW(p)
Real question: When you read a book aimed at the educated general public like The God Particle by Leon Lederman, do you consider it to be reasonably accurate or full of howlingly inaccurate simplifications?
Fun question: Do you have the ability to experimentally test http://physicsworld.com/cws/article/news/2006/sep/22/magnet-falls-freely-in-superconducting-tube ? Somebody's got to have a tubular superconductor just sitting around on a shelf.
Replies from: RolfAndreassen↑ comment by RolfAndreassen · 2012-06-09T05:00:07.360Z · LW(p) · GW(p)
I haven't actually read a popular-science book in physics for quite some time, so I can't really answer your question. The phrase "The God Particle" always makes me wince; it's exactly the sort of hyperbole that leads to howling misunderstandings of what physics is about. It's not Lederman's fault, though.
I've seen the magnet-in-tube experiment done with an ordinary conductor, which is actually more interesting to watch: If you want to see a magnet falling freely, you can use an ordinary cardboard tube! As for superconductors, it could be the solid-state guys have one lying around, but I haven't asked. You'd have to cool it to liquid-helium temperatures, or liquid nitrogen if you have a cool modern one, so I don't know that you'd actually be able to see the magnet fall.
The coolest tabletop experiment I've personally done (not counting taking a screwdriver to the BaBar detector) is building cloud chambers and watching the cosmic rays pass through.
Replies from: None↑ comment by [deleted] · 2012-06-09T08:00:32.533Z · LW(p) · GW(p)
It's not Lederman's fault, though.
He joked that he wanted to call it The Goddamned Particle.
I've seen the magnet-in-tube experiment done with an ordinary conductor
Oh, me too, in high school.
If you want to see a magnet falling freely, you can use an ordinary cardboard tube!
Well, in the link, there seemed to be some uncertainty as to whether a magnet in a superconducting tube would fall freely or be pinned.
You'd have to cool it to liquid-helium temperatures, or liquid nitrogen if you have a cool modern one, so I don't know that you'd actually be able to see the magnet fall.
There's this other axis you can look through...
comment by MrMind · 2012-06-22T13:51:05.619Z · LW(p) · GW(p)
I always wondered why there is so little study/progress on plasma wakefield acceleration, given that there's such a need for ever more powerful accelerators to study presently inaccessible energy regions. Is that because there's a fundamental limit that prevents building giant plasma-based accelerators, or is it just a poorly explored avenue?
Replies from: RolfAndreassen, Dreaded_Anomaly, shminux↑ comment by RolfAndreassen · 2012-06-25T22:15:16.912Z · LW(p) · GW(p)
Sorry, I missed your post. As shminux says, new concepts take time to mature; the first musket was a much poorer weapon than the last crossbow. Then you have to consider that this sort of engineering problem tends intrinsically to move a bit slower than areas that can be advanced by data analysis. Tweaking your software is faster than taking a screwdriver to your prototype, and can be done even by freshly-minted grad students with no particular risk of turning a million dollars of equipment into very expensive and slightly radioactive junk. It is of course possible for an inexperienced grad student to wipe out his local copy of the data which he has filtered using his custom software, and have to redo the filtering (example is completely hypothetical and certainly nothing to do with me), thus costing himself a week of work and the experiment a week of computer-farm time. But that is tolerable. For engineering work you want experienced folk.
Replies from: TimS↑ comment by Dreaded_Anomaly · 2012-07-12T22:13:03.925Z · LW(p) · GW(p)
It's a growing field. One of my roommates is working on plasma waveguides, a related technology.
↑ comment by Shmi (shminux) · 2012-06-22T17:16:02.632Z · LW(p) · GW(p)
I'm not an experimental physicist, but from what I know, the whole concept is relatively new and it takes time to get it to the point where it can compete with the technologies that had been perfected over many decades. With the groups at SLAC, CERN and Max Planck Institute (among others) working on it, we should expect to see some progress within a decade or so.
comment by DanielVarga · 2012-06-11T21:56:15.621Z · LW(p) · GW(p)
Can photon-photon scattering be harnessed to build a computer that consists of nothing but photons as constituent parts? I am only interested in theoretical possibility, not feasibility. If the question is too terse in this form, I am happy to elaborate. In fact, I have a short writeup that tries to make the question a bit more precise, and gives some motivation behind it.
Replies from: RolfAndreassen, shminux↑ comment by RolfAndreassen · 2012-06-12T04:54:16.778Z · LW(p) · GW(p)
Well, it depends on what you mean by "nothing but". You can obviously (in principle) make a logic gate of photon beams, but I don't see how you can make a stable apparatus of nothing but photons. You have to generate the light somehow.
NB: Sometimes the qualifier "in principle" is stronger than other times. This one is, I feel, quite strong.
Replies from: DanielVarga↑ comment by DanielVarga · 2012-06-12T12:21:29.499Z · LW(p) · GW(p)
What I mean by "in principle" is not that different from what Fredkin and Toffoli mean by it when talking about their billiard ball computer. The intuition is that when you figured out that some physical system can be harnessed for computation in principle, then you can start working on noise tolerance and energy consumption, and usually it turns out that those are not the show-stopper parts. And when I eventually try to link "in principle" to "in practice", I am still not talking about the scale of human engineering. You say you need to generate light for the system, and a strong gravitational field to trap the photons? I say, fine, I'll rearrange these galaxies into laser guns and gravitational photon traps for you.
Replies from: RolfAndreassen↑ comment by RolfAndreassen · 2012-06-12T16:50:50.470Z · LW(p) · GW(p)
Fair enough. I'm just saying, the galaxies aren't made purely of light, so you still don't have a computer of "nothing but" photons. But sure, the logic elements could be purely photonic.
↑ comment by Shmi (shminux) · 2012-06-11T22:12:58.278Z · LW(p) · GW(p)
It's an intriguing idea, a pure photon-based gate based on elastic scattering of photons, however I don't see how such a system would function, even in principle. Feel free to elaborate. Also, presumably constructing an equivalent electron- or neutron-based gate would be easier.
Replies from: DanielVarga↑ comment by DanielVarga · 2012-06-12T00:01:50.638Z · LW(p) · GW(p)
It's an intriguing idea, a pure photon-based gate based on elastic scattering of photons, however I don't see how such a system would function, even in principle.
I have no idea either. All that I have is a flawed analogy: We could in principle build a computer consisting of nothing but billiard balls as constituent parts. This would work even if billiard balls that meet, instead of bouncing off each other, merely changed their trajectories slightly, and only with a very small probability. I'd like to know whether this crude view of photon-photon scattering is A. a simplification that helps focus on the interesting part of the question, or B. a terrible misunderstanding.
Now I'll tell the original motivation behind the question. As an old LW regular, you have probably seen some phrase like "turn our future light cone into computronium" tossed out during some FAI discussion. What I am interested in is how to actually do that optimally, if you are limited by nothing but the laws of physics. In particular, I am interested in whether the optimal solution involves light-speed (or asymptotically light-speed) expansion, or (for entropy or other considerations) does not actually end up eating the whole light cone.
Obviously this is not my home turf, so maybe it is not even true that the scattering question is relevant at all when we try to answer the computronium question. I would appreciate any insights about either of them or their relationship.
Replies from: pengvado, shminux, Dreaded_Anomaly↑ comment by pengvado · 2012-06-12T14:49:40.331Z · LW(p) · GW(p)
I am interested in whether the optimal solution involves light-speed (or asymptotically light-speed) expansion, or (for entropy or other considerations) does not actually end up eating the whole light cone.
The form of the expansion has very little to do with the form of the computronium.
Launch von Neumann probes at c-ε. They can be tiny, so the energy cost to accelerate them is negligible compared to the energy you can harvest from a new star system. When one arrives, it builds a few more probes and launches them at further stars, then turns all the local matter into computers. The computers themselves don't need to move quickly, since the probes do all the long-distance colonization.
Replies from: DanielVarga↑ comment by DanielVarga · 2012-06-12T15:13:31.606Z · LW(p) · GW(p)
You are right. Originally I became interested in purely photon-based computation because I had an even more speculative idea that seemed to require it. If you have a system that terraforms everything in its path and expands at exactly the speed of light, then you are basically invisible to outside observation. You can probably see where this line of thought leads. I am aware of the obvious counterargument, but as I explained there, it is a bit weaker than it first appears.
↑ comment by Shmi (shminux) · 2012-06-12T04:05:01.650Z · LW(p) · GW(p)
We could in principle build a computer consisting of nothing but billiard balls as constituent parts.
I am quite sure that would be impossible without the balls being constrained by some other forces, such as gravity or outside walls.
Replies from: DanielVarga↑ comment by DanielVarga · 2012-06-12T12:21:12.487Z · LW(p) · GW(p)
You can build outside walls out of billiard balls. Eventually, such a system will disintegrate, but this is no different from any other type of computer. The important thing is that for any given computation length you can build such a system. The size of the system will grow with required computation length, but only polynomially.
Replies from: shminux↑ comment by Shmi (shminux) · 2012-06-12T15:55:35.384Z · LW(p) · GW(p)
I would be interested in seeing a metastable gate constructed solely out of billiard balls. Care to come up with a design?
Replies from: DanielVarga↑ comment by DanielVarga · 2012-06-12T17:58:22.100Z · LW(p) · GW(p)
Ah, now I see your point. I had this misconception that if you send a billiard ball into a huge brick-wall of billiard balls, it will bounce back. Okay, I don't have a design.
Replies from: shminux↑ comment by Shmi (shminux) · 2012-06-12T21:51:53.770Z · LW(p) · GW(p)
if you send a billiard ball into a huge brick-wall of billiard balls, it will bounce back.
It sure will, after imparting some momentum to the wall. My point is that I do not know how to construct a gate out of components interacting only through repulsive forces. I am not saying that it is impossible, I just do not see how it can be done.
Replies from: DanielVarga↑ comment by DanielVarga · 2012-06-12T22:59:54.096Z · LW(p) · GW(p)
It sure will
How much momentum will it lose before it bounces back? If a large enough wall can make this arbitrarily small, then I think the Fredkin and Toffoli billiard gates can be built out of a thick wall of billiard balls. Lucky thing, in this model there is no friction, so gates can be arbitrarily large. Sure, the system might start to misbehave after the walls move by epsilon, but this doesn't seem like a serious problem. In the worst case, we can use throw-away gates that are abandoned after one use. That model is still as strong as Boolean circuits.
↑ comment by Dreaded_Anomaly · 2012-06-12T09:34:33.375Z · LW(p) · GW(p)
The difference I see between photons and your example with billiard balls is that billiard balls have a rest frame. In other words, you can set them up so that they have no preexisting motion relative to you, and any change in their positions is due to your inputs. You can't do this with photons in a vacuum; they are massless, and must always move at c.
Photon-photon scattering is also a rare process in quantum electrodynamics. If you look at the Feynman diagram [image in the original: the lowest-order diagram, a closed fermion loop connecting the four photons]:
It has four vertices. Each vertex gives the cross-section of the process another factor of the fine structure constant α, which is a small number, about 1/137. A process like electron-electron or electron-positron scattering, on the other hand, has diagrams with only two vertices, so only two factors of α. (Of course, cross-sections also depend on mass, momentum, and so forth, but this gives a very simple heuristic for comparing processes.) The additional factor of α² ~ 0.00005 makes the cross section tiny compared to common QED processes.
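The same heuristic in symbols (each vertex contributes one power of α to the cross-section, since the amplitude picks up one power of e per vertex and the cross-section goes as the amplitude squared):

    \sigma_{\gamma\gamma \to \gamma\gamma} \propto \alpha^4, \qquad
    \sigma_{ee \to ee} \propto \alpha^2, \qquad
    \frac{\sigma_{\gamma\gamma}}{\sigma_{ee}} \sim \alpha^2 \approx 5 \times 10^{-5}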
If you want to use photons for computing, photonic crystals are your best bet, although the technology is still in early stages of development.
Replies from: DanielVarga↑ comment by DanielVarga · 2012-06-12T12:21:05.208Z · LW(p) · GW(p)
I don't know much about photon-photon scattering, but I do know that the cross section is very small. I see this as something that does not make a difference from a strictly theoretical point of view, but that might be because I don't understand the issues. Photonic crystals are not really relevant for my thought experiments, because you definitely can't build computers out of them that expand with the asymptotic speed of light. Maybe if you can turn regular material into photonic crystal by bombarding it with photons.
Replies from: Dreaded_Anomaly↑ comment by Dreaded_Anomaly · 2012-06-12T13:18:52.190Z · LW(p) · GW(p)
If two billiard balls come to occupy an overlapping volume in space at the same time, they will collide with probability (1 - ε) for ε about as small as we can imagine. However, photons will only scatter off each other rarely. Photons are bosons, so the vast majority of the time, they will just pass right through each other. That doesn't give you a dependable logic gate.
Replies from: DanielVarga↑ comment by DanielVarga · 2012-06-12T13:48:08.755Z · LW(p) · GW(p)
Maybe you are right, but it is not immediately obvious to me that small cross-section is a deadly problem. You shouldn't look at one isolated photon-photon encounter as a logic gate. Even an ordinary electronic transistor would not work without error correction. Using error correction, you can build complex systems that seem like magic when you attempt to understand them at the level of individual electrons.
comment by witzvo · 2012-06-09T10:58:28.635Z · LW(p) · GW(p)
When I read about quantum mechanics they always talk about "observation" as if it meant something concrete. Can you give me an experimental condition in which a waveform does collapse and another where it does not collapse, and explain the difference in the conditions? E.g. in the two slit experiment, when exactly does the alleged "observation" happen?
Replies from: RolfAndreassen, Luke_A_Somers↑ comment by RolfAndreassen · 2012-06-09T21:35:46.945Z · LW(p) · GW(p)
'Observation' is a shorthand (for historical reasons) for 'interaction with a different system', for example a detector or a human; but a rock will do as well. I would actually suggest you read the Quantum Mechanics Sequence on this point, Eliezer's explanation is quite good.
Replies from: Ezekiel, witzvo↑ comment by Ezekiel · 2012-06-10T01:24:01.134Z · LW(p) · GW(p)
Eliezer's explanation hinges on the MWI being correct, which I understand is currently the minority opinion. Are we to understand that you're with the minority on this one?
Replies from: RolfAndreassen↑ comment by RolfAndreassen · 2012-06-10T02:43:38.498Z · LW(p) · GW(p)
Well, yes. But if you don't like MWI, you can postulate that the collapse occurs when the mass of the superposed system grows large enough; in other words, that the explanation is somewhere in the as-yet-unknown unification of QM and GR. Of course, every time someone succeeds in maintaining a superposition of a larger system, you should reduce your probability for this explanation. I think we are now up to objects that are actually visible with the naked eye.
Replies from: witzvo↑ comment by witzvo · 2012-06-10T05:43:06.528Z · LW(p) · GW(p)
When I hear the phrase "many worlds interpretation," I cringe. This is not because I know something about the science (I know nothing about the science), it's because of confusing things I've heard in science popularizations. This reaction has kept me from reading Eliezer's sequence thus far, but I pledge to give it a fair shot soon.
Above you gave me a substitute phrase to use when I hear "observation." Is there a similar substitute phrase to use for MWI? Should I, for example, think "probability distribution over a Hilbert space" when I hear "many worlds", or is it something else?
Edit: Generally, can anyone suggest a lexicon that translates QM terminology into probability terminology?
Replies from: Douglas_Knight, Grognor, RolfAndreassen, witzvo↑ comment by Douglas_Knight · 2012-06-10T15:03:01.649Z · LW(p) · GW(p)
I'm not sure I'm addressing your question, but I advocate in place of "many worlds interpretation" the phrase "no collapse interpretation."
Replies from: witzvo, shminux↑ comment by witzvo · 2012-06-10T16:11:59.155Z · LW(p) · GW(p)
I'm not sure I'm addressing your question, but I advocate in place of "many worlds interpretation" the phrase "no collapse interpretation."
That's very helpful. It will help me read the sequence without being prejudiced by other things I've heard. If all we're talking about here is the wavefunction evolving according to Schrödinger's equation, I've got no problems, and I would call the "many worlds" terminology extremely distracting (e.g. to me it implies a probability distribution over some kind of "multiverse", whatever that is).
↑ comment by Shmi (shminux) · 2012-06-10T18:33:48.660Z · LW(p) · GW(p)
Personally, I advocate "no interpretation", in a sense "no ontology should be assigned to a mere interpretation".
Replies from: Viliam_Bur, witzvo↑ comment by Viliam_Bur · 2012-06-11T14:40:20.635Z · LW(p) · GW(p)
I am curious how exactly this approach would work outside of quantum physics, specifically in areas that are simpler or closer to our intuition.
I think we should use the same basic cognitive algorithms for thinking about all knowledge, not make quantum physics a "separate magisterium". So if the "no interpretation" approach is correct, it seems to me that it should be correct everywhere. I would like to see it applied to simple physics or even mathematics (perhaps even to something like 2+2=4, but I don't want to construct a strawman example here).
Replies from: shminux↑ comment by Shmi (shminux) · 2012-06-11T14:58:19.215Z · LW(p) · GW(p)
I was describing instrumentalism in my comment, and so far it has been working well for me in other areas as well. In mathematics, I would avoid arguing whether a theorem that is unprovable in a certain framework is true or false. In condensed matter physics, I would avoid arguing whether pseudo-particles, such as holes and phonons, are "real". In general, when people talk about a "description of reality" they implicitly assume the map-territory model, without admitting that it is only a (convenient and useful) model. It is possible to talk about observable phenomena without using this model. Specifically, one can describe research in natural science as building a hierarchy of models, each more powerful than the one before, without mentioning the word "reality" even once. In this approach all models of the same power (known in QM as interpretations) are equivalent.
↑ comment by witzvo · 2012-06-10T20:10:59.269Z · LW(p) · GW(p)
Personally, I advocate "no interpretation", in a sense "no ontology should be assigned to a mere interpretation".
Can you elaborate on this? (I'm not voting it down, yet anyway; but it has -3 right now)
I'm guessing that your point is that seeing and thinking about experimental results for themselves is more important than telling stories about them, yes?
↑ comment by Grognor · 2012-06-10T18:39:06.485Z · LW(p) · GW(p)
You could go with what Everett wanted to call it in the first place, the relative state interpretation.
To answer your "Edit" question, no, the relative state interpretation does not include probabilities as fundamental.
Replies from: witzvo↑ comment by witzvo · 2012-06-10T20:16:10.807Z · LW(p) · GW(p)
You could go with what Everett wanted to call it in the first place, the relative state interpretation.
Thanks! Getting back to original sources has always been good for me. Is that the "Relative state" formulation of quantum mechanics?
↑ comment by RolfAndreassen · 2012-06-10T15:59:16.372Z · LW(p) · GW(p)
I think it is necessary to exercise some care in demanding probabilities from QM. Note that the fundamental thing is the wave function, and the development of the wave function is perfectly deterministic. Probabilities, although they are the thing that everyone takes away from QM, only appear after decoherence, or after collapse if you prefer that terminology; and we Do Not Know how the particular Born probabilities arise. This is one of the genuine mysteries of modern physics.
↑ comment by witzvo · 2012-06-10T06:49:25.991Z · LW(p) · GW(p)
I was reflecting on this, and considering how statistics might look to a pure mathematician:
"Probability distribution, I know. Real number, I know. But what is this 'rolling a die'/'sampling' that you are speaking about?"
Honest answer: Everybody knows what it means (come on man, it's a die!), but nobody knows what it means mathematically. It has to do with how we interpret/model the data that we see that comes to us from experiments, and the most philosophically defensible way to give these models meaning involves subjective probability.
"Ah so you belong to that minority sect of Bayesians?"
Well, if you don't like Bayesianism you can give meaning to sampling a random variable X=X(\omega) by treating the "sampled value" x as a peculiar notation for X(\omega), and if you consider many such random variables, the things we do with x often correspond to theorems for which you could prove that a result happens with high probability using the random variables.
"Hmm. So what's an experiment?"
Sigh.
Replies from: witzvo↑ comment by witzvo · 2012-06-10T16:50:10.179Z · LW(p) · GW(p)
I was reflecting on this, and considering how statistics might look to a pure mathematician: "Probability distribution, I know. Real number, I know. But what is this 'rolling a die'/'sampling' that you are speaking about?"
Reflecting some more here (I hope this schizophrenic little monologue doesn't bother anyone), I notice that none of this would trouble a pure computer scientist / reductionist:
"Probability? Yeah, well, I've got pseudo-random number generators. Are they 'random'? No, of course not, there's a seed that maintains the state, they're just really hard to predict if you don't know the seed, but if there aren't too many bits in the seed, you can crack them. That's happened to casino slot machines before; now they have more bits."
"Philosophy of statistics? Well, I've got two software packages here: one of them fits a penalized regression and tunes the penalty parameter by cross validation. The other one runs an MCMC. They both give pretty similarly useful answers most of the time [on some particular problem]. You can't set the penalty on the first one to 0, though, unless n >> log(p), and I've got a pretty large number of parameters. The regression code is faster [on some problem], but the MCMC let's me answer more subtle questions about the posterior.
Have you seen the Church language or Infer.Net? They're pretty expressive, although the MCMC algorithms need some tuning."
Ah, but what does it mean when you run those algorithms?
"Mean? Eh? They just work. There's some probability bounds in the machine learning community, but usually they're not tight enough to use."
[He had me until that last bit, but I can't fault his reasoning. Probably Savage or de Finetti could make him squirm, but who needs philosophy when you're getting things done.]
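(To make his seeded-PRNG point concrete, here's a minimal Python sketch of my own; the seed value is arbitrary:)

```python
import random

# Two generators started from the same seed produce identical "random" streams:
gen_a = random.Random(42)
gen_b = random.Random(42)
assert [gen_a.random() for _ in range(5)] == [gen_b.random() for _ in range(5)]

# Knowing the seed makes the output fully predictable, which is why
# slot machines with too few seed bits could be cracked.
```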
Replies from: TheOtherDave↑ comment by TheOtherDave · 2012-06-10T17:33:21.649Z · LW(p) · GW(p)
who needs philosophy when you're getting things done
Well, among others, someone who wonders whether the things I'm doing are the right things to do.
Replies from: witzvo↑ comment by witzvo · 2012-06-10T05:33:11.619Z · LW(p) · GW(p)
Thanks.
Edit:
'Observation' is a shorthand (for historical reasons) for 'interaction with a different system', for example a detector or a human; but a rock will do as well.
I'm still confused. This seems to imply that there is no physical meaning to the term "observation," only a meaning relative to whatever model we're entertaining in a given instance. Specifically (as far as I know) there's only one system of relevance, the Universe (or the Universe of Universes, if multiple worlds stuff means anything and we insist on ruining another perfectly clear English word), so it can't interact with a different system except from the point of view of a particular mathematical model of a subset of that system. Edit: or is the word "system" a technical term too? Sigh.
Replies from: RolfAndreassen↑ comment by RolfAndreassen · 2012-06-10T15:48:26.734Z · LW(p) · GW(p)
Indeed, your point is well taken; it is precisely this sort of argument that makes the MWI (sorry if you dislike the phrase!) attractive. If we prepare an electron in a superposition of, say, spin-up and spin-down, then it makes good sense to say that the electron eventually interacts with the detector, or detector-plus-human, system. But hang on, how do we know that the detector doesn't then go into a superposition of detecting-up and detecting-down, and the human into a superposition of seeing-the-detector-saying-up and seeing-the-detector-saying-down? Well, we don't experience a superposition, but then we wouldn't; we can only experience one brain state at a time!
Push this argument out to the whole universe and, as you rightly say, there's no further system it can interact with; there's no Final Observer to cause the collapse. (Although I've seen Christians use this as an argument for their god.) So the conclusion seems to be that there is no collapse, there's just the point where the human's wave function splits into two parts and we are consciously aware either of the up or down state. Now, there's one weakness to this: It is really not clear why, if this is the explanation, we should get the Born probabilities.
So, to return to the collapse postulate, one popular theory is that 'observation' means "the system in superposition becomes very massive": In other words, the electron interacts with the detector, and the detector-plus-electron system is in a superposition; but of course the detector is fantastically heavy on the scale of electrons, so this causes the collapse. (Or to put it differently, collapse is a process whose probability per unit time goes asymptotically to one as the mass increases.) In other words, 'observation' is taken as some process which occurs in the unification of QM with GR. This is a bit unsatisfactory in that it doesn't account for the lack of unitarity and what-have-you, but at least it gives a physical interpretation to 'observation'.
Replies from: witzvo↑ comment by witzvo · 2012-06-10T17:01:55.563Z · LW(p) · GW(p)
Indeed, your point is well taken; it is precisely this sort of argument that makes the MWI (sorry if you dislike the phrase!) attractive.
Yay! The rest of your argument seems sensible, but I'm too giddy to really understand it right now. I'll just ask this: can you point me to a technical paper (Arxiv is fine) where they explain, in detail, exactly how they get a certain electron "in a superposition of, say, spin-up and spin-down"?
Replies from: RolfAndreassen↑ comment by RolfAndreassen · 2012-06-10T18:53:28.676Z · LW(p) · GW(p)
Well, I don't know that I need to point you to arxiv, because I can describe the process in two sentences. Take a beam of electrons and pass it through a magnetic field which splits it into two beams, one going left and one going right. The ones which went left are spin-left, or to put it differently, they are spin-up with respect to the left-right axis; conversely the ones that went right have the opposite spin polarisation on that axis. Now rotate your axis ninety degrees; the electrons in both beams are in a perfect up-down superposition with respect to the new axis. If you rotate the axis less than ninety degrees you will get a different superposition.
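To put numbers on that last step (a sketch of my own, not anything in the original description): prepare the state spin-up along the original axis, and the Born probabilities for a measurement along an axis tilted by an angle theta come out as cos^2(theta/2) and sin^2(theta/2).

```python
import numpy as np

# Spin-1/2 state prepared "up" along one axis, re-expressed in the basis of
# an axis tilted by angle theta. Born probabilities are the squared amplitudes.
def probabilities(theta):
    amplitudes = np.array([np.cos(theta / 2), np.sin(theta / 2)])
    return amplitudes ** 2

for theta in (0.0, np.pi / 2, np.pi):
    print(np.degrees(theta), probabilities(theta))
# 90 degrees gives [0.5, 0.5], the perfect up-down superposition described
# above; smaller rotations give different (unequal) superpositions.
```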
Replies from: witzvo, Alicorn↑ comment by witzvo · 2012-06-10T20:31:55.863Z · LW(p) · GW(p)
describe the process in two sentences.
Well, that's helpful, but of course, I don't know how you know that the electrons have such and such spin or what superposition has to do with anything. Neither could I reproduce the experiment (someone competent could, I'm sure). Maybe there was a first experiment where they did this and spin was discovered?
EDIT: anyway, I'm tapping out of here and will check out the sequences. Thanks All
Replies from: Dreaded_Anomaly↑ comment by Dreaded_Anomaly · 2012-06-10T22:27:03.870Z · LW(p) · GW(p)
I don't know how you know that the electrons have such and such spin
Electrons have both electric charge and spin (which is a form of angular momentum), and in combination, these two properties create an intrinsic magnetic moment. A magnetic field exerts torque on anything with a magnetic moment, which causes the electron to precess if it is subjected to such a field. Because spin is quantized and has only two possible values for electrons (+1/2 or -1/2), they will only precess in two discrete ways. This can be used to separate the electrons by their spin values. The first experiment to do this was the Stern-Gerlach experiment, a classic in the early development of QM, and often considered to be the discovery of spin.
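(For reference, the textbook relation behind this: in an inhomogeneous field the deflecting force on the magnetic moment is

$$ F_z = \mu_z \frac{\partial B_z}{\partial z}, \qquad \mu_z \propto S_z = \pm \frac{\hbar}{2}, $$

so the two quantized spin values translate directly into two discrete deflections.)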
Replies from: witzvo↑ comment by Alicorn · 2012-06-10T19:20:42.643Z · LW(p) · GW(p)
That was four sentences! D:
Replies from: RolfAndreassen↑ comment by RolfAndreassen · 2012-06-10T19:39:16.576Z · LW(p) · GW(p)
Four is equal-ish to two for large values of two, at least in the limit where four is small. Besides, the last sentence is a comment, not a description of the process, so it doesn't count. :)
↑ comment by Luke_A_Somers · 2012-06-09T11:39:28.209Z · LW(p) · GW(p)
The different cases of an observation are different components of the wavefunction (component in the vector sense, in an approximately-infinite-dimensional space called Hilbert space). Observation is the point where the different cases can never come back together and interfere. This normally happens because two components differ in ways that are so widespread that only a thermodynamically small (effectively 0) component of each of them will resolve and contribute to interference against the other.
This process is called Decoherence.
Replies from: witzvo↑ comment by witzvo · 2012-06-09T20:01:11.638Z · LW(p) · GW(p)
This normally happens because two components differ in ways that are so widespread that only a thermodynamically small (effectively 0) component of each of them will resolve and contribute to interference against the other.
What? I'm looking for a specific experimental condition where collapse happens and where it doesn't. E.g. suppose an electron (or rather the waveform that represents it) is impinging on a sheet of some fluorescent material. I'm guessing it hasn't collapsed yet, right? Then the waveform interacts with the sheet and causes a specific particle of the sheet to eject a photon. Is that collapse? Or does collapse not happen until some "observer" comes along? Or is collapse actually more subtle and can it be partial?
Replies from: Luke_A_Somers, witzvo↑ comment by Luke_A_Somers · 2012-06-11T17:18:10.988Z · LW(p) · GW(p)
Then the waveform interacts with the sheet and causes a specific particle of the sheet to eject a photon. Is that collapse?
The waveform interacts with the sheet such that a small part of many, many different parts of the sheet interacts, and exactly one in each case. Since it's fluorescent, and not simply reflective, the time scale of the re-release depends finely on local details, and is going to wash out any reasonable interference pattern anyway.
This means that it is thermodynamically unlikely for these different components to 'come back together' so they could interfere. That's also when it loses its long-range correlations, which is the mathematical criterion for decoherence.
Due to the baggage, I personally avoid the term 'collapse', but if you're going to use it, then it's attached to the process of decoherence. Decoherence can be gradual, while 'collapse' sounds abrupt.
A partially decoherent system would be one where you have a coherent signal passing repeatedly around a mirror track. Each lap, a little bit of the signal gets mixed due to imperfections in the mirrors, and the beam becomes progressively less coherent.
So, where in there is a collapse? Eh. It would be misleading to phrase the answer that way.
↑ comment by witzvo · 2012-06-09T20:33:43.441Z · LW(p) · GW(p)
What? I'm looking for a specific experimental condition where collapse happens and where it doesn't.
Wikipedia seems to indicate that the answer is that we don't know when or if collapse happens. This is interesting, because when I was taught quantum mechanics, the notion seemed to be "of course it happens.... when we observe it... now back to Hilbert spaces" which rather soured me on the enterprise. I don't mind Hilbert spaces by the way, I just want to know how they relate to experiment. So is wikipedia right?
Replies from: evandcomment by Risto_Saarelma · 2012-06-09T07:37:23.810Z · LW(p) · GW(p)
More of a theoretical question, but something I've been looking into on and off for a while now.
Have you ever run into geometric algebra or people who think geometric algebra would be the greatest thing ever for making the spatial calculation aspects of physics easier to deal with? I just got interested in it again through David Hestenes' article (pdf), which also features various rants about physics education. Far as I can figure out so far, it's distantly analogous to how you can use complex numbers to do coordinate-free rotations and translations on a plane, only generalizable to any number of dimensions you want.
Replies from: RolfAndreassen↑ comment by RolfAndreassen · 2012-06-09T21:24:17.137Z · LW(p) · GW(p)
Have you ever run into geometric algebra or people who think geometric algebra would be the greatest thing ever for making the spatial calculation aspects of physics easier to deal with?
I can't say I have, no. Sorry! I'm afraid I couldn't make much of the Wiki article; it lost me at "Clifford algebra". Both definitions could do with a specific example, like perhaps "Three-vectors under cross products are an example of such an algebra", supposing of course that that's true.
Replies from: Risto_Saarelma↑ comment by Risto_Saarelma · 2012-06-10T00:01:17.877Z · LW(p) · GW(p)
Linking to Wikipedia on an advanced math concept was probably a bit futile, those generally don't explain much to anyone not already familiar with the thing. The Hestenes article, and this tutorial article are the ones I've been reading and can sort of follow, but once they get into talking about how GA is the greatest thing ever for Pauli spin matrices, I have no idea what to make of it.
Replies from: RolfAndreassen↑ comment by RolfAndreassen · 2012-06-10T02:37:15.052Z · LW(p) · GW(p)
The tutorial article is much easier to follow, yes. Now, it's been years since I did anything with Pauli spinors, and one reason for that is that they rather turned me off theory; I could never understand what they were supposed to represent physically. This idea of seeing them as a matrix expression isomorphic to a geometric relation is appealing. Still, I couldn't get to the point of visualising what the various operations were doing; I understand that you're keeping track of objects having both scalar and vector components, but I couldn't quite see what was going on as I can with cross products. That said, it took me a while to learn that trick for cross products, so quite possibly it's just a question of practice.
comment by DanielLC · 2012-06-09T06:47:45.940Z · LW(p) · GW(p)
Why can't you build an electromagnetic version of a Tipler cylinder? Are electromagnetism and gravity fundamentally different?
How does quantum configuration space work when dealing with systems that don't conserve particles (such as particle-antiparticle annihilation)? It's not like you could just apply Schrödinger's equation to the sum of configuration spaces of different dimensions, and expect amplitude to flow between those configuration spaces.
A while ago I had a timeless physics question that I don't feel I got a satisfactory answer to. Short version: does time asymmetry mean that you can't make the timeless wave-function only have a real part?
Replies from: RolfAndreassen↑ comment by RolfAndreassen · 2012-06-09T21:18:30.014Z · LW(p) · GW(p)
Why can't you build an electromagnetic version of a Tipler cylinder? Are electromagnetism and gravity fundamentally different?
Well yes, to the best of our knowledge they are: Electromagnetic charge doesn't bend space-time in the same way that gravitational charge (ie mass) does. However, finding a description that unifies electromagnetism (and the weak and strong forces) with gravity is one of the major goals of modern physics; it could be the case that, when we have that theory, we'll be able to describe an electromagnetic version of a Tipler cylinder, or more generally to say how spacetime bends in the presence of electric charge, if it does.
How does quantum configuration space work when dealing with systems that don't conserve particles (such as particle-antiparticle annihilation)? It's not like you could just apply Schrödinger's equation to the sum of configuration spaces of different dimensions, and expect amplitude to flow between those configuration spaces.
You have reached the point where quantum mechanics becomes quantum field theory. I don't know if you are familiar with the Hamiltonian formulation of classical mechanics? It's basically a way of encapsulating constraints on a system by making the variables reflect the actual degrees of freedom. So to drop the constraint of conservation of particle number you just write a Hamiltonian that has number of particles as a degree of freedom; in fact, the number of particles at every point in position-momentum space is a degree of freedom. Then you set up the allowed interactions and integrate over the possible paths. Feynman diagrams are graphical shorthands for such integrals.
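If it helps to see the bookkeeping, here is a toy numerical sketch of my own (one mode, state space truncated at five quanta), showing particle number becoming an ordinary degree of freedom:

```python
import numpy as np

# Single-mode Fock space truncated at n_max quanta. The annihilation
# operator a removes one quantum; its transpose creates one.
n_max = 5
a = np.diag(np.sqrt(np.arange(1, n_max)), k=1)  # annihilation
a_dag = a.T                                     # creation
N = a_dag @ a                                   # number operator

print(np.diag(N))  # [0. 1. 2. 3. 4.]: particle number is just an observable

# A Hamiltonian term like a_dag @ a_dag @ a changes the particle number,
# which is exactly what processes that create or destroy particles need.
```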
A while ago I had a timeless physics question that I don't feel I got a satisfactory answer to. Short version: does time asymmetry mean that you can't make the timeless wave-function only have a real part?
I'm afraid I can't help you there; I don't even understand why reversing the time cancels the imaginary parts. Is there a particular reason the T operator should multiply by a constant phase? That said, to the best of the current knowledge the wave function is indeed symmetric under CPT, so if your approach works at all, it should work if you apply CPT instead of T reversal.
Replies from: bogdanb↑ comment by bogdanb · 2012-09-01T10:47:46.615Z · LW(p) · GW(p)
Why can't you build an electromagnetic version of a Tipler cylinder? Are electromagnetism and gravity fundamentally different?
Well yes, to the best of our knowledge they are: Electromagnetic charge doesn't bend space-time in the same way that gravitational charge (ie mass) does. However, finding a description that unifies electromagnetism (and the weak and strong forces) with gravity is one of the major goals of modern physics; it could be the case that, when we have that theory, we'll be able to describe an electromagnetic version of a Tipler cylinder, or more generally to say how spacetime bends in the presence of electric charge, if it does.
There’s something very confusing to me about this (the emphasized sentence). When you say “in the same way”, do you mean “mass bends spacetime, and electromagnetic charge doesn’t”, or is it “EM charge also bends spacetime, just differently”?
Both interpretations seem to be sort-of valid for English (I’m not a native speaker). AFAIK it’s valid English to say “a catapult doesn’t accelerate projectiles the way a cannon does”, i.e., it still accelerates projectiles but does it differently, but it’s also valid English to say “neutron stars do not have fusion in their cores the way normal stars do”, i.e., they don’t have fusion in their cores at all. (Saying “X in the same way as Y” rather than the shorter “X the way Y” seems to lean towards the former meaning, but it still seems ambiguous to me.)
So, basically, which one do you mean? From the last part of that paragraph (“if it does”), it seems that we don’t really know. But if we don’t, then why are Reissner-Nordström or Kerr-Newman black holes treated separately from Schwarzschild and Kerr black holes? Wikipedia claims that putting too much charge in one would cause a naked singularity; doesn’t the charge have to bend spacetime to make the horizon go away?
I encountered similar ambiguity problems with basically all explanations I could find, and also for other physics questions. One such question that you might have an answer to is: Do superconductors actually have really, truly, honest-to-Omega zero resistance, or is it just low enough that we can ignore it over really long time frames? (I know superconductors per se are a bit outside of your research, but I assume you know a lot more than I do due to the ones used in accelerators, and perhaps a similar question applies to color-superconducting phases of matter you might have had to learn about for your actual day job.)
Replies from: RolfAndreassen, MixedNuts, shminux↑ comment by RolfAndreassen · 2012-09-02T01:56:20.783Z · LW(p) · GW(p)
Superconductor resistance is zero to the limit of accuracy of any measurement anyone has made. In a similar vein, the radius of an electron is 'zero': That is to say, if it has a nonzero radius, nobody has been able to measure it. In the case of electrons I happen to know the upper bound, namely 10^-18 meters; if the radius was larger than that, we would have seen it. For superconductors I don't know the experimental upper limit on the resistance, but at any rate it's tiny. Additionally, I think there are some theoretical reasons, ie from the QM description of what's going on, to believe it is genuinely zero; but I won't swear to that without looking it up first.
About electromagnetic Tipler cylinders, I should have said "the way that". As far as I know, electromagnetism does not bend space.
Replies from: bogdanb↑ comment by bogdanb · 2012-09-02T07:27:18.264Z · LW(p) · GW(p)
Thank you for the limits explanation, that cleared things up.
About electromagnetic Tipler cylinders, I should have said "the way that". As far as I know, electromagnetism does not bend space.
OK, but if so then do you know the explanation for why:
1) charged black holes are studied separately, and those solutions seem to look different than non-charged black holes?
2) what does it mean that a photon has zero rest mass but non-zero mass “while moving”? I’ve seen calculations that show light beams attracting each other in some cases (IIRC parallel light beams remain parallel, but “anti-parallel” beams always converge), and I also saw calculations of black holes formed by infalling shells of radiation rather than matter.
3) doesn’t energy-matter equivalence imply that fields that store energy should bend space like matter does?
What am I missing here?
Replies from: RolfAndreassen↑ comment by RolfAndreassen · 2012-09-02T17:54:54.669Z · LW(p) · GW(p)
2) what does it mean that a photon has zero rest mass but non-zero mass “while moving”? I’ve seen calculations that show light beams attracting each other in some cases (IIRC parallel light beams remain parallel, but “anti-parallel” beams always converge), and I also saw calculations of black holes formed by infalling shells of radiation rather than matter.
A moving photon does not have nonzero mass, it has nonzero momentum. In the Newtonian approximation we calculate momentum as p=mv, but this does not work for photons, where we instead use the full relativistic equation E^2 = m^2c^4 + p^2c^2 (observe that when p is small compared to mc, this simplifies to a rather more well-known equation), which, taking m=0, gives p = E/c.
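In numbers (a sketch of my own, with an illustrative photon energy):

```python
c = 299_792_458.0  # speed of light, m/s

def momentum(E, m):
    """Relativistic momentum from E^2 = m^2 c^4 + p^2 c^2, in SI units."""
    return (E ** 2 - (m * c ** 2) ** 2) ** 0.5 / c

E_photon = 3.2e-19            # roughly a 2 eV visible-light photon, in joules
print(momentum(E_photon, 0))  # m = 0 gives p = E/c, about 1.1e-27 kg m/s
```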
As for light beams attracting each other, that's an electromagnetic effect described by higher-order Feynman diagrams, like the one shown here. (At least, that's true if I'm thinking of the same calculations you are.)
1) charged black holes are studied separately, and those solutions seem to look different than non-charged black holes?
3) doesn’t energy-matter equivalence imply that fields that store energy should bend space like matter does?
Both good points. I'm afraid we're a bit beyond my expertise; I'm now unsure even about the electromagnetic Tipler cylinder.
↑ comment by MixedNuts · 2012-09-01T20:52:07.307Z · LW(p) · GW(p)
Do superconductors actually have really, truly, honest-to-Omega zero resistance, or is it just low enough that we can ignore it over really long time frames?
It's for-real zero. (Source: conference La supraconductivité dans tous ses états, Palaiseau, 2011) Take a superconductive loop with a current in it and measure its resistance with a precise ohmmeter. You'll find zero, which tells you that the resistance must be less than the absolute error on the ohmmeter. This tells you that an electron encounters a resistive obstacle at most once every few tens of kilometers or so. But the loop is much smaller than that, so there can't be any obstacles in it.
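Another standard way such bounds are set, sketched here with made-up numbers of my own (not the conference's figures): a persistent current in a loop of inductance L and residual resistance R decays as exp(-Rt/L), so seeing no decay over a long observation caps R.

```python
import math

L_loop = 1e-6       # loop inductance in henries (illustrative value)
t_obs = 3.15e7      # observation time: about one year, in seconds
max_missed = 1e-6   # largest fractional decay the measurement could have missed

# I(t) = I0 * exp(-R * t / L)  =>  R < -L * ln(1 - max_missed) / t
R_max = -L_loop * math.log(1 - max_missed) / t_obs
print(R_max)        # ~3e-20 ohm: any residual resistance must be below this
```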
Replies from: bogdanb↑ comment by bogdanb · 2012-09-02T07:12:55.689Z · LW(p) · GW(p)
It's for-real zero. (Source: conference La supraconductivité dans tous ses états, Palaiseau, 2011)
Man, that is so weird. I live in Palaiseau—assuming you’re talking about the one near Paris—and I lived there in 2011, and I had no idea about that conference. I don’t even know where in Palaiseau it could have taken place...
Replies from: MixedNuts↑ comment by Shmi (shminux) · 2012-09-07T17:23:31.025Z · LW(p) · GW(p)
Re Tipler cylinder (incidentally, discovered by van Stockum). It's one of those eternal solutions you cannot construct in a "normal" spacetime, because any such construction attempt would hit the Cauchy horizon, where the "first" closed timelike curve (CTC) is supposed to appear. I put "first" in quotation marks because the order of events loses meaning in spacetimes with CTCs. Thus, if you attempt to build a large enough cylinder and spin it up, something else will happen before the frame-dragging effect gets large enough to close the time loop. This has been discussed in the published literature; just look up references to Tipler's papers. Amos Ori spent a fair amount of time trying to construct (theoretically) something like a time machine out of black holes, with marginal success.
comment by [deleted] · 2012-06-16T04:28:37.122Z · LW(p) · GW(p)
What is your opinion of the Deutsch-Wallace claimed solution to the probability problems in MWI?
Also are you satisfied with decoherence as means to get preferred basis?
Lastly: do you see any problems with extending MWI to QFT (relativity issues) ?
Replies from: RolfAndreassen↑ comment by RolfAndreassen · 2012-06-17T16:52:50.796Z · LW(p) · GW(p)
What is your opinion of the Deutsch-Wallace claimed solution to the probability problems in MWI?
Now we're getting into the philosophy of QM, which is not my strength. However, I have to say that their solution doesn't appeal to that part of me that judges theories elegant or not. Decision theory is a very high-level phenomenon; to try to reason from that back to the near-fundamental level of quantum mechanics - well, it just doesn't feel right. I think the connection ought to be the other way. Of course this is a very subjective sort of argument; take it for what it's worth.
Also are you satisfied with decoherence as means to get preferred basis?
I'm not really familiar enough with this argument to comment; sorry!
Lastly: do you see any problems with extending MWI to QFT (relativity issues) ?
Nu, QM and QFT alike are not yet reconciled with general relativity; but as for special relativity, QFT is generally constructed to incorporate it from the ground up, unlike QM which starts with the nonrelativistic Schrodinger equation and only introduces Dirac at a later stage. So if there's a relativity problem it applies equally to QM. Apart from that, it's all operators in the end; QFT just generalises to the case where the number of particles is not conserved.
comment by kilobug · 2012-06-09T08:17:48.564Z · LW(p) · GW(p)
Not sure you're the right person to ask this, but there are two questions which have bothered me for a while and I never found any satisfying answer (though I have to admit I didn't take too much time digging on them either):
In high school I was taught about "potential energy" for gravity. When objects gain speed (so, kinetic energy) because they are attracted by another mass, they lose an equivalent amount of potential energy, to keep the conservation of energy. But what happens when the mass of an object changes due to a nuclear reaction? The mass of the Sun is decreasing every second, due to nuclear fusion inside the sun (I'm not speaking of particles escaping the sun's gravity, but of the conversion of mass to energy during nuclear fusion). So the gravitational potential energy of the Earth and all other planets is decreasing. How is this compatible with conservation of energy? It can't be the energy released by the nuclear reaction; the fusion of hydrogen doesn't release more energy just because Earth and Jupiter are around.
Similarly for conservation issues, I have always been bothered by permanent magnets. They can move things, so they can generate kinetic energy (in metal, other magnets, ...). But where does this energy come from? Is it stored when the magnet is created and depleted slowly as the magnet does its work? Or something else?
Sorry if these are silly questions for a PhD physicist like you, but I'm a computer scientist, not a physicist, and they do bother me!
Replies from: army1987, gjm, RolfAndreassen↑ comment by A1987dM (army1987) · 2012-06-09T10:43:39.569Z · LW(p) · GW(p)
The mass of the Sun is decreasing every second, due to nuclear fusion inside the sun (I'm not speaking of particles escaping the sun's gravity, but of the conversion of mass to energy during nuclear fusion).
IMO “conversion of mass to energy” is a very misleading way to put it. Mass can have two meanings in relativity: the relativistic mass of an object is just its energy over the speed of light squared (and it depends on the frame of reference you measure it in), whereas its invariant mass is the square root of the energy squared minus the momentum squared (modulo factors of c), which is the same in all frames of reference, and coincides with the relativistic mass in the centre-of-mass frame (the one in which the momentum is zero). The former usage has fallen out of favour in the last few decades (since it is just the energy measured in different units -- and most theorists use units where c = 1 anyway), so in recent ‘serious’ texts mass means “invariant mass”, and so it will in the rest of this post.
Note that the mass of a system isn't the sum of the masses of its parts, unless its parts are stationary with respect to each other and don't interact. It also includes contributions from the kinetic and potential energies of its parts.
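A quick numerical illustration of that point (my own sketch, in units where c = 1):

```python
import numpy as np

def invariant_mass(four_momenta):
    """Invariant mass of a system from its total four-momentum (E, px, py, pz),
    in units where c = 1: m^2 = E^2 - |p|^2."""
    total = np.sum(four_momenta, axis=0)
    return np.sqrt(total[0] ** 2 - np.sum(total[1:] ** 2))

# Two massless photons, each with E = 1, moving in opposite directions:
photons = np.array([[1.0, 0.0, 0.0,  1.0],
                    [1.0, 0.0, 0.0, -1.0]])
print(invariant_mass(photons))  # 2.0: the system has mass though its parts don't
```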
The reason why the Sun loses mass is that particles escape it; if they didn't, the loss in the particles' rest mass would be compensated by the increase in their kinetic energy, and the total mass would stay the same. The mass of an isolated system cannot change (since neither its energy nor its momentum can). If you enclosed the Sun in a perfect spherical mirror (well, one which would reflect neutrinos as well), from outside the mirror, to a first approximation, you couldn't tell what's going on inside. The total energy of everything would stay the same.
Now, if the Sun gets lighter, the planets do drift away so they have more (i.e. less negative) potential energy, but this is compensated by the kinetic energy of particles escaping the Sun... or something. I'm not an expert in general relativity, and I hear that it's non-trivial to define the total energy of a system when gravity is non-negligible, but the local conservation of energy and momentum does still apply. (Is there any theoretical physicist specializing in gravitation around?)
As for 2., that's the energy of the electromagnetic field. (The electromagnetic field can also store angular momentum, which can lead to even more confusing situations if you don't realize that, e.g. the puzzle in The Feynman Lectures on Physics 2, 17-4.)
Replies from: Dreaded_Anomaly↑ comment by Dreaded_Anomaly · 2012-06-09T23:44:41.382Z · LW(p) · GW(p)
I'm not an expert in general relativity, and I hear that it's non-trivial to define the total energy of a system when gravity is non-negligible, but the local conservation of energy and momentum does still apply. (Is there any theoretical physicist specializing in gravitation around?)
Sean Carroll has a good blog post about energy conservation in general relativity.
↑ comment by gjm · 2012-06-09T10:26:31.229Z · LW(p) · GW(p)
I'm not Rolf (nor am I strictly speaking a physicist), but:
There isn't really a distinction between mass and energy. They are interconvertible (e.g., in nuclear fusion), and the gravitational effect of a given quantity of energy is the same as that of the equivalent mass.
There is potential energy in the magnetic field. That energy changes as magnets, lumps of iron, etc., move around. If you have a magnet and a lump of iron, and you move the iron away from the magnet, you're increasing the energy stored in the magnetic field (which is why you need to exert some force to pull them apart). If the magnet later pulls the lump of iron back towards it, the kinetic energy for that matches the reduction in potential energy stored in the magnetic field. And yes, making a magnet takes energy.
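(For reference, the field energy in question has a standard local expression: the energy per unit volume stored in electric and magnetic fields is

$$ u = \frac{\varepsilon_0 E^2}{2} + \frac{B^2}{2\mu_0}. $$

Moving the iron changes B throughout space, and the volume integral of u changes accordingly.)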
[EDITED to add: And, by the way, no they aren't silly questions.]
Replies from: kilobug↑ comment by kilobug · 2012-06-09T10:59:41.761Z · LW(p) · GW(p)
Hum, that's a reply to both you and army1987; I know mass and energy aren't really different and you can convert one to the other; but AFAIK (and maybe this is where I'm mistaken), while massless energy (like photons) is affected by gravity, it doesn't itself create gravity. When the full reaction goes on in the Sun, fusing two hydrogen nuclei into helium, releasing gamma rays and neutrinos in the process, the gamma rays don't generate gravity, and the resulting (helium + neutrinos) doesn't have as much gravitational mass as the initial hydrogen did.
The same happens when an electron and a positron collide: the electron/positron did generate a gravitational force on nearby matter, leading to potential energy, but when they collide and generate gamma-ray photons instead, there is no longer any gravitational force generated.
Or do the gamma rays produce gravitation too? I'm pretty sure they don't... but am I mistaken on that?
Replies from: Alejandro1, shminux↑ comment by Alejandro1 · 2012-06-09T14:48:26.984Z · LW(p) · GW(p)
Or do the gamma rays produce gravitation too? I'm pretty sure they don't... but am I mistaken on that?
They do. In Einstein's General Relativity, the source of the gravitational field is not just "mass" as in Newton's theory, but a mathematical object called the "energy-momentum tensor", which, as its name would indicate, encompasses all forms of mass, energy and momentum present in all particles (e.g. electrons) and fields (e.g. electromagnetic), with the sole exception of gravity itself.
Replies from: bogdanb↑ comment by bogdanb · 2012-07-03T20:18:55.701Z · LW(p) · GW(p)
with the sole exception of gravity itself.
I’ve seen this said a couple of times already in the last few days, and I’ve seen this used as a justification for why a black hole can attract you even though light cannot escape them. But black holes are supposed to also have charge besides mass and spin. So how could you tell that without electromagnetic interactions happening through the event horizon?
Replies from: Alejandro1, None↑ comment by Alejandro1 · 2012-07-03T21:10:48.101Z · LW(p) · GW(p)
That is a good question. There is more than one way to formulate the answer in nonmathematical terms, but I'm not sure which would be the most illuminating.
One is that the electromagnetic force (as opposed to electromagnetic radiation) is transmitted by virtual photons, not real photons. No real, detectable photons escape a charged black hole, but the exchange of virtual photons between a charge inside and one outside results in an electric force. Virtual particles are not restricted by the rules of real particles and can go "faster than light". (Same for virtual gravitons, which transmit the gravitational force.) The whole talk of virtual particles is rather heuristic and can be misleading, but if you are familiar with Feynman diagrams you might buy this explanation.
A different explanation that does not involve quantum theory: Charge and mass (in the senses relevant here) are similar in that they are defined through measurements done in the asymptotic boundary of a region. You draw a large sphere at large distance from your black hole or other object, define a particular integral of (respectively) the gravitational or the electromagnetic field there, and its result is defined as the total mass/charge enclosed. So saying a black hole has charge is just equivalent to saying that it is a particular solution of the coupled Einstein-Maxwell equations in which the electromagnetic field at large distances takes such-and-such form.
Notice that whichever explanation you pick, the same explanation works for charge and mass, so the peculiarity of gravity not being part of the energy-momentum tensor that I mentioned above is not really relevant for why the black hole attracts you. Where have you read this?
Replies from: bogdanb↑ comment by bogdanb · 2012-08-05T11:42:46.563Z · LW(p) · GW(p)
Hi Alejandro, I just remembered I hadn’t thanked you for the answer. So, thanks! :-)
I don’t remember where I’ve seen the explanation (that gravity works through event horizons because gravitons themselves are not affected), it seemed wrong so I didn’t actually give a lot of attention to it. I’m pretty sure it wasn’t a book or anything official, probably just answers on “physics forums” or the like.
For some reason, I’m not quite satisfied with the two views you propose. (I mean in the “I really get it now” way, intellectually I’m quite satisfied that the equations do give those results.)
For the former, I never really grokked virtual particles, so it’s kind of a non-explanatory explanation. (I.e., I understand that virtual particles can break many rules, but I don’t understand them enough to figure out more-or-less intuitively their behavior, e.g. I can’t predict whether a rule would be broken or not in a particular situation. It would basically be a curiosity stopper, except that I’m still curious.)
For the latter, it’s simply that retreating to the definition that quickly seems unsatisfying. (Definitions are of course useful, but less so for “why?” questions.)
The only explanation I could think of that does make (some) intuitive sense and is somewhat satisfactory to me is that we can never actually observe particles crossing the event horizon, they just get “smeared”* around its circumference while approaching it asymptotically. So we’re not interacting with mass inside the horizon, but simply with all the particles that fell (and are still falling) towards it.
(*: Since we can observe with basically unlimited precision that their height above the EH and vertical speed is very close to zero, I can sort of get that uncertainty in where they are around the hole becomes arbitrarily high, i.e. pretty much every particle becomes a shell, kind of like a huge but very tight electronic orbital. IMO this also “explains” the no-hair theorem more satisfyingly than the EH blocking interactions. Although it does get very weird if I think about why they should seem to rise as the black hole grows, which I just dismiss with “the EH doesn’t rise, the space above it shrinks because there are more particles pulling on it”, which is probably not much more wrong than any other “layman” explanation.)
Of course, all this opens a different** can of worms, because it’s very unintuitive that particles should be eternally suspended above an immaterial border that is pretty much defined as no-matter-how-hard-you-try-you'll-still-fall-through-it. But you can’t win them all, and anyway it’s already weird that falling particles see something completely different, and for some reason relativity always seemed to me more intuitive than quantum physics, no matter how hairy it gets.
(**: Though a more accurate metaphor would probably be that it opens the same can of worms, just on a different side of the can...)
Replies from: Alejandro1, Alejandro1↑ comment by Alejandro1 · 2012-08-08T03:11:24.383Z · LW(p) · GW(p)
OK, here is another attempt at explanation; it is a variation of the second one I proposed above, but in a way that does not rely on arguing by definition.
Imagine the (charged, if you want) star before collapsing into a black hole. If you have taken some basic physics courses, you must know that the total mass and charge can be determined by measurements at infinity: the integral of the normal component of the electric field over a sphere enclosing the star gives you the charge, up to a proportionality constant (Gauss's Law), and the same thing happens for the gravitational field and mass in Newton's theory, with a mathematically more complicated but conceptually equivalent statement holding in Einstein's.
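(In symbols, for the electromagnetic case:

$$ Q_{\text{enc}} = \varepsilon_0 \oint_S \mathbf{E} \cdot d\mathbf{A}, $$

with S a large sphere enclosing the star; the gravitational statement is the conceptually analogous surface integral at infinity that defines the ADM mass.)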
Now, as the star begins to collapse, the mass and charge results that you get applying Gauss's Law at infinity cannot change (because they are conserved quantities). So the gravitational and electromagnetic fields that you measure at infinity do not change either. All this keeps applying when the black hole forms, so you keep feeling the same gravitational and electric forces as you did before.
Replies from: bogdanb↑ comment by bogdanb · 2012-08-10T19:23:55.673Z · LW(p) · GW(p)
Thanks for your perseverance :-)
Yeah, you’re right, putting it this way at least seems more satisfactory, it certainly doesn’t trigger the by-definition alarm bells. (The bit about mass and charge being conserved quantities almost says the same thing, but I think the fact that conservation laws stem from observation rather than just labeling things makes the difference.)
However, by switching the point of view to sphere integrals at infinity it sort of side-steps addressing the original question, i.e. exactly what happens at the event horizon such that masses (or charges) inside it can still maintain the field outside it in such a state that the integral at infinity doesn’t change. Basically, after switching the point of view the question should be how come those integrals are conserved, after the source of the field is hidden behind an event horizon?
(After all, it takes arbitrarily longer to pass a photon between you and something approaching an EH the closer it gets, which is sort of similar to it being thrown away to infinity the way distant objects “fall away” from the observable universe in a Big Rip, it doesn’t seem like there is a mechanism for mass and charge to be conserved in those cases.)
Replies from: shminux↑ comment by Shmi (shminux) · 2012-08-10T22:36:43.244Z · LW(p) · GW(p)
how come those integrals are conserved, after the source of the field is hidden behind an event horizon?
First, note that there are no sources of gravity or of electromagnetism inside a black hole. Contrary to popular belief, black holes, like wormholes, have no center. In fact, there is no way to tell them apart from outside.
Second, electric field lines are lines in space, not spacetime, so they are not sensitive to horizons or other causal structures.
it takes arbitrarily longer to pass a photon between you and something approaching an EH
This is wrong as stated, it only works in the opposite direction. It takes progressively longer to receive a photon emitted at regular intervals from someone approaching a black hole. Again, this has nothing to do with an already present static electric field.
Replies from: bogdanb↑ comment by bogdanb · 2012-08-11T13:11:06.066Z · LW(p) · GW(p)
how come those integrals are conserved, after the source of the field is hidden behind an event horizon?
First, note that there are no sources of gravity or of electromagnetism inside a black hole. Contrary to popular belief, black holes, like wormholes, have no center.
For your second sentence, I sort of get that; there’s no point one can travel to that satisfies any “center” property; the various symmetries would have a center on finitely-curved spacetime, but for a black hole that area gets stretched enough that you can only define the “center” as a sort of limit (as far as I can tell, you can define the direction to it, it’s just infinitely far away no matter where you start from—technically, the direction to it becomes “in the future” once the EH forms, right?). However, I didn’t say “center”, I said just “behind the EH”. “Once” a particle “crosses” it, it already seems as if it should no longer have an influence on the outside.
Basically, intuition says that we should see the mass (or charge, to disentangle the generated field from the spacetime) sort of disappear once it crosses. Time slowing near the EH would help intuition because it suggests we’d never see the particle cross (thus, we always see a charge generating the field we’re measuring), but we’d see it redshift (signals about it moving take longer to arrive, thus the field becomes closer to static), it’s just that I’m not sure I’m measuring that time from the right reference frame.
it takes arbitrarily longer to pass a photon between you and something approaching an EH
This is wrong as stated, it only works in the opposite direction. It takes progressively longer to receive a photon emitted at regular intervals from someone approaching a black hole.
OK, wait a minute. Are you saying that if a probe falls into a BH, a laser on the probe sends pulses every 1 s (by its clock), and a laser on my orbiting Science Vessel shines a light on it every 1 s (by my clock), I’ll see the probe’s pulses slow down, but my reflected pulses will return at 1 Hz, just redshifted further (closer to a static field) the closer the probe falls? That seems weird, but it might be so; my intuition kind of groans at these setups.
But there must be some formulation along those lines that works; I’m just too in love with my “smearing” intuition. And I really feel a local explanation is needed: the integral at infinity basically only explains the mass of the black hole (how strongly it pulls), not its position (where it pulls towards).
I’m having a bit of trouble explaining my conflicting intuition, because stretching space affects both distance and redshift. If I understand correctly, the closer something is to an EH (as measured in our external-but-finite-distance-away reference frame), the further it is redshifted. So we can’t see it crossing pretty much because it’s too dark. But, in our reference frame, does it seem to be still approaching the EH, or did it also seem to stop above it before disappearing due to redshift?
Another formulation: denoting d the distance between the two masses, D the Schwarzschild radius of the combined masses, and R the redshift of signals sent by the probe mass, all measured in our reference frame, outside but at a finite distance from the experiment, my understanding is that R(d) goes to infinity as d nears D from above; however, what is d(t) doing in that vicinity: is it nearing zero, or going to (negative) infinity as well? Remember, d and t are measured in our reference frame.
If we get ridiculously better instruments from Omega, can we observe the “impact” further into the future, or just closer to a fixed point? E.g., say our old instruments can measure until 11:59; afterwards the redshift is too much. If we get arbitrarily better ones, do we get to see until, say, 12:36, or is there a limit like 12:00 that you can approach but can’t cross no matter how good your telescopes are, and we get to see, say, 11:59.9 with d'(t) = 0.9c, 11:59.99 with d'(t) = 0.99c, and so on?
Consider the following mental experiment: a test particle of mass m falls towards a black hole of mass M. (For extra points, the test particle could be a small black hole itself.)
At t_0, the two masses are some distance apart, and the gravitational field (or the electric one, if the BH and test mass are charged), when tested closely enough but not “touching” either of the two masses, looks like that generated by two point masses at a certain distance.
At t_h, the test particle is very close to the EH of the big black hole. (This is somewhat easier to define for a mini black hole as a test particle; just say that the distance between their event horizons is smaller than an epsilon.) So, at t_h, by probing closely but not too close, we should see a field looking like one generated by two point particles, at a distance a bit larger than the Schwarzschild radius of their combined masses.
At some point t_H > t_h, we should see just a black hole of mass (M+m), and the no hair theorem says that (if we don’t go too near to the EH, but definitely closer than infinity) it should look like the field is generated by a point mass of (M+m) at the center of gravity of the original system. Looking carefully a bit closer, we should see a slightly larger EH around that (more-or-less imaginary) point.
But what happens between t_h and t_H? I see no reason for discontinuity, so that means that we should be able to see two point masses getting closer and touching. (Or, more precisely, by measuring the field at a finite distance during that interval, we should see a (non-static) field that looks like one generated by two point masses getting closer and closer until they touch.)
Within the “smearing” intuitive view, that works out quite nicely: we see two point masses going nearer and nearer. But the closer the test mass is to the EH, the more its position is “smeared” around the BH. Basically the field we measure continuously keeps looking like two point masses approaching, but we can no longer tell where each is (the orientation of the line connecting the two points becomes unspecified as the line’s extremities rotate at lightspeed), so rotational momentum pretty much becomes pure spin.
In other words, between t_0 and t_h the test mass spins faster and faster, such that at t_crossing it rotates at lightspeed, one of the space dimensions becomes time, and we (already) no longer can tell where it is around the BH “center”. Which means that at “t_crossing” the gravitational field already has only mass and spin.
(Side-note 1: I sort of see two individually non-rotating black holes as having spherical EHs, and a rotating one as having an ellipsoid one, further away from spherical the faster it rotates. The above thought experiment asks more-or-less how do two spheres become an ellipsoid (or two ellipsoids become a bigger, less spherical one) when two BH merge. In my intuition, the two spheres simply rotate so far around one another we can no longer tell which is which, sort of blurring together. This seems very intuitive, although it gets a bit complicated to explain what happens when the total angular momentum happens to be zero, and I get nothing trying to explain what happens when a two–non-rotating–BHs system with zero total angular momentum collapses. I.e., if a non-rotating test black-hole falls straight into a non-rotating black hole, resulting in a bigger non-rotating black-hole, I don’t get at all how we get from observing a field that looks like it’s generated by two point masses to one that looks generated by just one, with no rotation. Which does suggest that what my intuition matches the rotating case just by accident.)
(Side-note 2: I was under the impression that time slowing down near the horizon, such that we never see anything cross, meant we “see” things hovering just above it, redshift making it just harder to see. But given that we see big black holes and astronomers say they became bigger over time, and your comment, it might have meant just that they disappear through redshift before we can see them cross. Is that so? Is the positional-uncertainty semi-explanation my intuition feeds me total poppycock, with just the “infinite” rotation speed being the only reason for the “hairs” disappearing? If so, that sort of kills my hope that it would make evaporation easier to swallow—tunneling from being frozen and red-shifted just above the horizon, and more importantly carrying back information, seems more intuitive than virtual particles becoming real just because of space flowing fast enough around them, which AFAIK is the usual explanation and doesn’t explain at all what’s going on with entropy.)
Replies from: shminux↑ comment by Shmi (shminux) · 2012-08-11T18:37:07.051Z · LW(p) · GW(p)
TL;DR :)
I recommend learning the Penrose space-time diagrams; they make things intuitive.
↑ comment by Alejandro1 · 2012-08-06T03:26:28.350Z · LW(p) · GW(p)
I'm sorry that my explanations didn't work for you; I'll try to think of something better :).
Meanwhile, I don't think it is good to think in terms of matter "suspended" above the event horizon without crossing it. It is mathematically true that the null geodesics (lightray trajectories) coming from an infalling trajectory, leaving from it over the finite proper time period that it takes for it to get to the event horizon, will reach you (as a far-away observer) over an infinite range of your proper time. But I don't think much of physical significance follows from this. There is a good discussion of the issue in Misner, Thorne and Wheeler's textbook: IIRC, a calculation is outlined showing that, if we treat the light coming from the falling chunk of matter classically, its intensity is exponentially suppressed for the far-away observer over a relatively short period of time, and if we treat it in a quantum way, there is only a finite expected number of photons received, again over a relatively short time. So the "hovering matter" picture is a kind of mathematical illusion: if you are far away looking at falling matter, you actually do see it disappear when it reaches the event horizon.
↑ comment by [deleted] · 2012-07-03T20:52:37.242Z · LW(p) · GW(p)
Interesting question; I never thought about whether there is any way to test a black hole's charge. My guess is that right now we can only assume it is there based on theory.
Replies from: None, wedrifid↑ comment by [deleted] · 2012-07-03T21:05:44.940Z · LW(p) · GW(p)
Found a relevant answer at http://www.astro.umd.edu/~miller/teaching/questions/blackholes.html "black holes can have a charge if they eat up too many protons and not enough electrons (or vice versa). But in practice this is very unusual, since these charges tend to be so evenly balanced in the universe. And then even if the black hole somehow picked up a charge, it would soon be neutralized by producing a strong electric field in the surrounding space and sucking up any nearby charges to compensate. These charged black holes are called "Reissner-Nordstrom black holes" or "Kerr-Newman black holes" if they also happen to be spinning." -Jeremy Schnittman
↑ comment by wedrifid · 2012-07-03T21:20:25.716Z · LW(p) · GW(p)
Interesting question; I never thought about whether there is any way to test a black hole's charge.
Calculate the black hole's mass. Put a charged particle somewhere in the vicinity of the black hole. Measure acceleration. Do math.
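A minimal sketch of that recipe in the weak-field limit (Newtonian gravity plus a Coulomb term, valid far from the horizon); the function and the numbers are illustrative, not any real API or measurement:

```python
# Constants (SI units)
G   = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
K_E = 8.988e9     # Coulomb constant, N m^2 C^-2

def inferred_charge(M, m, q, r, a_measured):
    """Solve the weak-field force balance for the hole's charge Q.

    A test particle (mass m, charge q) at distance r feels an inward
    radial acceleration  a = G*M/r**2 - K_E*Q*q/(m*r**2),  so
    Q = m*(G*M - a*r**2) / (K_E*q)."""
    return m * (G * M - a_measured * r**2) / (K_E * q)

# Toy check: a solar-mass hole; if the measured pull on a 1 g,
# 1 microcoulomb particle parked at 1e6 km equals the pure-gravity
# value, the inferred charge comes out ~0.
M_sun = 1.989e30
a_gravity_only = G * M_sun / (1e9) ** 2
print(inferred_charge(M_sun, 1e-3, 1e-6, 1e9, a_gravity_only))  # ~0 C
```

Close to the horizon the "do math" step would need the full Reissner-Nordstrom metric rather than this inverse-square shorthand.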
Replies from: None, JulianMorrison↑ comment by [deleted] · 2012-07-03T22:15:37.832Z · LW(p) · GW(p)
That much is obvious given the assumption that charged fields work properly through a black hole, which was not obvious, particularly given Alejandro's statement. After confirming that the charge of a black hole can interact without being impeded by the singularity, there are a lot of obvious ways to check the charge.
↑ comment by JulianMorrison · 2012-07-03T21:30:06.702Z · LW(p) · GW(p)
Will that work? Or to put it particle-ish-ly, how is the information about a charge inside an event horizon able to escape?
↑ comment by Shmi (shminux) · 2012-06-09T19:33:00.874Z · LW(p) · GW(p)
Or do the gamma rays produce gravitation too? I'm pretty sure they don't... or am I mistaken on that?
There is a lot of potential (no pun intended) for confusion here, because the subject matter is so far from our intuitive experience. There is also the caveat "as far as we know", because there have not been measurements of gravity on the scale below tenths of a millimeter or so.
First, in GR gravity is defined as spacetime (not just space) curvature, and energy-momentum (the two are linked together in relativity) is what sources that curvature. This is the content of the Einstein equation (Einstein curvature tensor = energy-momentum tensor, in units where 8πG/c⁴ = 1).
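Written out with the constants kept (the second form is the trace-reversed version, where the Ricci tensor appears directly):

```latex
G_{\mu\nu} \;\equiv\; R_{\mu\nu} - \tfrac{1}{2} R\, g_{\mu\nu}
  \;=\; \frac{8\pi G}{c^4}\, T_{\mu\nu},
\qquad
R_{\mu\nu} \;=\; \frac{8\pi G}{c^4}\Big( T_{\mu\nu} - \tfrac{1}{2} T\, g_{\mu\nu} \Big).
```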
In this sense, all matter creates spacetime curvature, and hence gravity. However, this gravity does not have to behave in the way we are used to. For example, it would be misleading to say that, for example, a laser beam attracts objects around it, even though it has energy. Let me outline a couple of reasons, why. In the following, I intentionally stay away from talking about single photons, because those are quantum objects, and QM and GR don't play along well.
Before a gravitational disturbance is felt, it has to propagate toward the detector that "feels" it. For example, suppose you measure the (classical) gravitational field from an isolated super-powerful laser before it fires. Next, you let it fire a short burst of light. What does the detector feel and when? If it is extremely sensitive, it might detect some gravitational radiation, mostly due to the laser recoiling. Eventually, the gravitational field it measures will settle down to the new value, corresponding to the new, lower, mass of the laser (it is now lighter because some of its energy has been emitted as light). The detector will not feel much, if any, "pull" toward the beam of light traveling away from it. The exact (numerical) calculation is extremely complicated, requires enormous amounts of computing power, and has not been done, as far as I know.
What would a detector measure when the beam of light described above travels past it? This is best visualized by considering a "regular" massive object traveling past, then taking a limit in which its speed goes to the speed of light, but its total energy remains constant (and equal to the amount of energy of the said laser beam). This means that its rest mass is reduced as its speed increases. I have not done the calculation, but my intuition tells me that the effects are reduced as speed increases, because both the rest mass and the amount of time the object remains near the detector go down dramatically. (Note that the "relativistic mass" stays the same, however.)
There is much more to say about this, but I've gone on for too long as it is.
EDIT: It looks like there is an exact solution for a beam of light, called the Bonnor beam. This is somewhat different from what I described (a short pulse), but the interesting feature is that two such beams do not attract. This is not very surprising, given that regular cosmic strings do not attract, either.
Replies from: army1987↑ comment by A1987dM (army1987) · 2012-06-09T20:11:52.714Z · LW(p) · GW(p)
8πG
How come no one has come up with a symbol (say G-bar) for that, as they did with ħ for h/2π when they realized ħ was a more 'natural' constant than h? (Or has anybody come up with a single symbol for 8πG?)
Replies from: Alejandro1, shminux↑ comment by Alejandro1 · 2012-06-10T00:44:44.126Z · LW(p) · GW(p)
The notation κ = 8πG is sometimes used, e.g. in this Wiki article. However, it is much less universal than ħ.
↑ comment by Shmi (shminux) · 2012-06-09T22:05:23.510Z · LW(p) · GW(p)
There aren't many people who do this stuff for a living (as is reflected in exactly zero Nobel prizes for theoretical work in relativity so far), and different groups/schools use different units (most popular is G=1, c=1), so there is not nearly as much pressure to streamline the equations.
↑ comment by RolfAndreassen · 2012-06-09T22:41:06.848Z · LW(p) · GW(p)
They are not silly questions, I asked them myself (at least the one about the Sun) when I was a student. However, it seems army1987 got there before I did. So, yep, when converting from mass-energy to kinetic energy, the total bending of spacetime doesn't change. Then the photon heads out of the solar system, ever-so-slightly changing the orbits of the planets.
As for magnets, the energy is stored either in their internal structure, i.e. the domains in a classic iron magnet, or in the magnetic field density. I think these are equivalent formulations. An interesting experiment would be to make a magnet move a lot of stuff and see if it got weaker over time, as this theory predicts.
Replies from: army1987↑ comment by A1987dM (army1987) · 2012-06-10T19:26:08.572Z · LW(p) · GW(p)
An interesting experiment would be to make a magnet move a lot of stuff and see if it got weaker over time, as this theory predicts.
If you're not thinking of moving a lot of stuff at once, then every time you pull a piece of the stuff back off the magnet to where it was before, you're returning energy to the system, so the energy needn't ever be exhausted. (Though I guess it still eventually would be if the system is at a non-zero temperature, because in each cycle some of the energy could be wasted as heat.)
comment by timtyler · 2012-06-09T01:29:52.471Z · LW(p) · GW(p)
Please tell us what you make of http://en.wikipedia.org/wiki/Quantum_Darwinism
Replies from: RolfAndreassen↑ comment by RolfAndreassen · 2012-06-09T02:13:35.371Z · LW(p) · GW(p)
Well, it's theory, which is not my strong suit; these are just first impressions on casual perusal. It is not obvious nonsense. It is not completely clear to me what the advantage is over plain Copenhagen-style collapse. It makes no mention of even special relativity - it uses the Schrödinger rather than the Dirac equation; but usually extending to Dirac is not very difficult. The approach of letting phases have significance appeals to me on the intuitive level that finds elegance in theories; having this unphysical variable hanging about has always annoyed me. In Theorem 3 it is shown that only the pointer states can maintain a perfect correlation, which is all very well, but why assume perfect correlation? If it's one-minus-epsilon, then presumably nobody would notice for sufficiently small epsilon. Overall, it's interesting but not obviously revolutionary. But really, you want a theorist for this sort of thing.
Replies from: timtyler↑ comment by timtyler · 2012-06-09T11:09:10.935Z · LW(p) · GW(p)
Thanks. I gave it a tentative thumbs up too.
comment by mfb · 2012-07-12T21:53:08.300Z · LW(p) · GW(p)
Just wondering: Apart from the selection that the D should come from the primary vertex, did you do anything special to treat D from B decays? I found page 20, but that is a bit unspecific in that respect. Some D⁰ happen to fly nearly in the same direction as the B meson, and I would assume that the D⁰/slow-pion combination cannot resolve this well enough.
(I worked on charm mixing, too, and had the same issue. A reconstruction of some of these events helped to directly measure their influence.)
comment by Cyan · 2012-07-04T01:18:13.762Z · LW(p) · GW(p)
Is there any redeeming value in this article by E.T. Jaynes suggesting that free electrons localize into wave packets of charge density?
The idea, near as I can tell, is that the spreading solution of the wave equation is non-physical because "zitterbewegung", high-frequency oscillations, generate a net-attractive force that holds the wave packet together. (This is Jaynes holding out the hope of resurrecting Schrödinger's charge density interpretation of the wave equation.)
Replies from: RolfAndreassen↑ comment by RolfAndreassen · 2012-07-09T04:17:10.114Z · LW(p) · GW(p)
I don't have time to read it right now, but I suggest that unless it accounts for how a charge density can be complex, it doesn't really help. The problem is not to come up with some physical interpretation of the wave mechanics; if that were all, the problem would have been solved in the twenties. The difficulty is to explain the complex metric.
comment by DanielLC · 2012-06-17T04:18:57.655Z · LW(p) · GW(p)
I'm confused about part of quantum encryption.
Alice sends a photon to Bob. If Eve tries to measure the polarization, and measures it on the wrong axis, there's a chance Bob won't get the result Alice sent. From what I understand, if Eve copies the photon, using a laser or some other method of getting entangled photons, and she measures the copied photon, the same result will happen to Bob. What happens if Eve copies the photon, and waits until Bob reads it before she does?
Also, you referred to virtual particles as a convenient fiction when responding to someone else. I assumed that they were akin to a particle being in a place with more potential energy than there is energy in a system during quantum tunneling. The particle is real. It's just that due to the fact that the kinetic energy is negative, it behaves in a way that makes the waveform small at any real distance. Was I completely off base?
Also, should I have just edited my old post instead of adding a new one?
Replies from: RolfAndreassen↑ comment by RolfAndreassen · 2012-06-17T16:38:24.231Z · LW(p) · GW(p)
What happens if Eve copies the photon, and waits until Bob reads it before she does?
Not my field, but it seems to me that it should be the same thing that happens if Bob tries to read the photon after Eve has already done so. You can only read the quantum information off once. Now, an interesting question is, what happens if Eve goes off into space at near lightspeed, and reads the photon at a time such that the information "Bob has read the photon" hasn't had time to get to her spaceship? If I understand correctly, it doesn't matter! This scenario is just a variant of the Bell's-inequality experiment.
Also, you referred to virtual particles as a convenient fiction when responding to someone else. I assumed that they were akin to a particle being in a place with more potential energy than there is energy in a system during quantum tunneling. The particle is real. It's just that due to the fact that the kinetic energy is negative, it behaves in a way that makes the waveform small at any real distance. Was I completely off base?
So firstly, in quantum tunneling the particle never occupies the forbidden area. It goes from one allowed area to another without occupying the space between; hence the phrase "quantum leap". Of course this is not so difficult to imagine when you think of a probability cloud rather than a particle; if you think of a system with parts ABC, where B is forbidden but A and C are allowed, then there is at any time a zero probability of finding the particle in B, but a nonzero probability to find it in A and C. This is true even if at some earlier time you find it in A, because, so to speak, the wave function can go where the particle can't. So, yes, if you ever found the particle in B its kinetic energy would be negative, but in fact that doesn't happen. So now we come to matters of taste: The wave function does exist within B; is this a mathematical fiction, because no experiment will find the particle there, or is it real since it explains how you can find the particle at C?
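To put a number on how strongly the wave function is suppressed in the forbidden region: for a rectangular barrier, the amplitude inside decays exponentially, and the tunneling probability goes roughly as exp(−2κL). A minimal sketch of that standard WKB-style estimate; the specific electron/barrier numbers are made up for illustration:

```python
import math

HBAR = 1.0546e-34   # reduced Planck constant, J s
M_E  = 9.109e-31    # electron mass, kg
EV   = 1.602e-19    # joules per electronvolt

def tunneling_probability(E_eV, V0_eV, L_m):
    """Rough (WKB-style) probability that an electron of energy E
    crosses a rectangular barrier of height V0 > E and width L.
    Inside the barrier the wave function decays like exp(-kappa*x)
    instead of oscillating -- the classically forbidden region B."""
    kappa = math.sqrt(2 * M_E * (V0_eV - E_eV) * EV) / HBAR
    return math.exp(-2 * kappa * L_m)

# A 1 eV electron meeting a 2 eV barrier 1 nm wide: the amplitude
# leaking from region A to region C is tiny but decidedly nonzero.
print(tunneling_probability(1.0, 2.0, 1e-9))  # ~4e-5
```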
Then, back to virtual particles. The mass of a virtual particle can be negative; it is really unclear to me what it would even mean to observe such a thing. Therefore I think of them as a convenient fiction. But they are certainly a very helpful fiction, so, you know, take your choice.
Also, should I have just edited my old post instead of adding a new one?
I don't think so; the number of comments here is so large that it would be very easy to miss an edit.
Replies from: DanielLC↑ comment by DanielLC · 2012-06-17T17:23:21.316Z · LW(p) · GW(p)
You can only read the quantum information off once.
Bob knows the right way to polarize it, though. If Eve tries to read it but polarizes it wrong, it would mess with the polarization of Bob's particle, so there's a chance he'd notice. If Bob polarizes it the way Alice did, and then Eve polarizes it wrong when she reads it, will Bob notice? If Bob notices, he just predicted the future. If he does not, then he can tell whether or not when Eve reads it constitutes "future", violating relativity of simultaneity.
So firstly, in quantum tunneling the particle never occupies the forbidden area.
If you solve Schrödinger's time-independent equation for a finite well, there is non-zero amplitude outside the well. If you calculate the kinetic energy on that part of the waveform, it will come out negative. You obviously wouldn't be able to observe it outside the well, in the sense of getting it to decohere to a state where it's mostly outside the well, without giving it enough energy to be in that state. That's just a statement about how the system evolves when you put a sensor in it. If you trust the Born probabilities and calculate the probability of being in a region of configuration space with the particle mid-quantum-tunnel, it will come out finite.
... it is really unclear to me what it would even mean to observe such a thing.
I don't really care about observation. It's just a special case of how the system evolves when there's a sensor in it. I want to know how virtual particles act on their own. Do they evolve in a way fundamentally different from particles with positive kinetic energy, or are they just what you get when you set up a waveform to have negative energy, and watch it evolve?
Replies from: RolfAndreassen↑ comment by RolfAndreassen · 2012-06-19T04:03:09.239Z · LW(p) · GW(p)
Bob knows the right way to polarize it, though. If Eve tries to read it but polarizes it wrong, it would mess with the polarization of Bob's particle, so there's a chance he'd notice. If Bob polarizes it the way Alice did, and then Eve polarizes it wrong when she reads it, will Bob notice? If Bob notices, he just predicted the future. If he does not, then he can tell whether or not when Eve reads it constitutes "future", violating relativity of simultaneity.
Good point. My initial answer wasn't fully thought through; I again have to note that this isn't really my area of expertise. There is apparently something called the no-cloning theorem, which states that there is no way to copy arbitrary quantum states with perfect fidelity and without changing the state you want to copy. So the answer appears to be that Eve can't make a copy for later reading without alerting Bob that his message is compromised. However, it seems to be possible to copy imperfectly without changing the original; so Eve can get a corrupted copy.
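For reference, the standard two-line linearity argument behind the no-cloning theorem:

```latex
% Suppose a unitary U clones arbitrary states against a blank |0>:
%   U(|psi> (x) |0>) = |psi> (x) |psi>  for all |psi>.
% Feeding it the superposition a|0> + b|1>, linearity gives
U\big((a|0\rangle + b|1\rangle)\otimes|0\rangle\big)
  = a\,|0\rangle|0\rangle + b\,|1\rangle|1\rangle,
% while perfect cloning would instead require
(a|0\rangle + b|1\rangle)\otimes(a|0\rangle + b|1\rangle)
  = a^2|0\rangle|0\rangle + ab\,|0\rangle|1\rangle
  + ab\,|1\rangle|0\rangle + b^2|1\rangle|1\rangle,
% and the two agree only when a = 0 or b = 0.
```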
There is presumably some tradeoff between the corruption of your copy and the disturbance in the original message. You want to keep the latter below the expected noise level, so for a given noise level there is some upper limit on the fidelity of your copying. To understand whether this is actually a viable way of acquiring keys, you'd have to run the actual numbers. For example, if you can get 1024-bit keys with one expected error, you're golden: just try the key with each bit flipped and each combination of two bits flipped, and see if you get a legible message. This is about half a million tries, trivial. (Even so, Alice can make things arbitrarily difficult by increasing the size of the key.) If we expected corruption in half the bits, that's something else again.
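A toy version of that brute-force repair, assuming we have some oracle that recognizes a correct key (say, the decrypted message is legible); for 1024 bits, one- and two-bit flips give 1 + 1024 + 1024·1023/2 ≈ 5×10⁵ candidates. The function name and demo values are invented for the example:

```python
from itertools import combinations

def repair_key(noisy_key, is_valid, max_flips=2):
    """Try every candidate within `max_flips` bit-flips of `noisy_key`
    (a list of 0/1 ints).  `is_valid` is a caller-supplied oracle,
    e.g. "does decrypting with this key yield a legible message?".
    Returns the repaired key, or None if nothing nearby validates."""
    n = len(noisy_key)
    for k in range(max_flips + 1):
        for positions in combinations(range(n), k):
            candidate = noisy_key[:]
            for i in positions:
                candidate[i] ^= 1  # flip this bit
            if is_valid(candidate):
                return candidate
    return None

# Toy demo: our eavesdropped copy differs from the true key in 2 bits.
true_key = [0, 1, 1, 0, 1, 0, 0, 1]
noisy = true_key[:]
noisy[2] ^= 1
noisy[5] ^= 1
print(repair_key(noisy, lambda c: c == true_key))  # recovers true_key
```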
I don't know what the limits on copying fidelity actually are, so I can't tell you which scenario is more realistic.
As I say, this is a bit out of my expertise; please consider that we are discussing this as equals rather than me having the higher status. :)
If you solve Schroedinger's time-independent equation for a finite well, there is non-zero amplitude outside the well. If you calculate kinetic energy on that part of the waveform, it will come out negative. You obviously wouldn't be able to observe it outside the well, in the sense of getting it to decohere to a state where it's mostly outside the well, without giving it enough energy to be in that state. That's just a statement about how the system evolves when you put a sensor in it. If you trust the Born probabilites and calculate the probability of being in a configuration space with a particle mid-quantum tunnel, it will come out finite.
You are correct. It seems to me, however, that you would not actually observe a negative energy; you would instead be seeing the Heisenberg relation between energy and time, ΔE Δt ≥ ħ/2; in other words, the particle energy has a fundamental uncertainty in it and this allows it to occupy the classically forbidden region for short periods of time.
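To get a feel for the scale (using ħ ≈ 6.6×10⁻¹⁶ eV·s), an energy deficit of 1 eV can persist for roughly

```latex
\Delta E\,\Delta t \ge \frac{\hbar}{2}
\quad\Longrightarrow\quad
\Delta t \lesssim \frac{\hbar}{2\,\Delta E}
\approx \frac{6.6\times 10^{-16}\ \mathrm{eV\,s}}{2 \times 1\ \mathrm{eV}}
\approx 3\times 10^{-16}\ \mathrm{s}.
```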
I don't really care about observation. It's just a special case of how the system evolves when there's a sensor in it. I want to know how virtual particles act on their own. Do they evolve in a way fundamentally different from particles with positive kinetic energy, or are they just what you get when you set up a waveform to have negative energy, and watch it evolve?
Your original question was whether virtual particles are real; perhaps I should ask you, first, to define the term. :) However, they are at least as real as the different paths taken by the electron in the two-slit experiment; if you set things up so that particular virtual-particle energies are impossible, the observed probabilities change, just like blocking one of the slits.
Well, as they can have negative mass you have to assume that their gravitational interactions are, to coin a phrase, counterintuitive. (That is, even for quantum physicists! :) ) But, of course, we don't have any sort of theory for that. As far as interactions that we actually know something about go, they are the same, modulo the different mass in the propagator. (That is, the squiggly line in the Feynman diagram, which has its own term in the actual path integral; you have to integrate over the masses.)
comment by epigeios · 2012-06-12T03:28:02.570Z · LW(p) · GW(p)
I've got a lot of questions I just thought of today. I am personally hoping to think of a possible alternative model of quantum physics that doesn't need anything more than the generation 1 fermions and photons, and doesn't need the strong interaction.
- What is the reason for the existence of the theory of the charm quark (or any generation 2-3 quark)? What are some results of experiments that necessitate the existence of a charm quark?
- Which of the known hadrons can be directly observed in any way, as opposed to theorized as a mathematical in-between or as a trigger for some directly observable decay?
- Am I right in thinking that the tau lepton is only theorized in order to explain an in-between decay state? If you don't know, do you know of anything related to any other fermions (or hadrons) that only exist as a theoretical in-between?
- How were the masses of the tau lepton and the top quark determined? If the methods are different for the charm quark, how was the mass of the charm quark determined?
- Does the weak interaction cause any sort of movement, or hold anything together, or does it only act as a trigger for decay? Why is it considered a field energy?
- When detecting gamma radiation, how much background is there to extract from? Does the process of extracting from the background require performing hundreds of iterations of the experiment?
- Since you know quite a lot about it, and since the majority of my knowledge comes from Wikipedia, what does "fitting distributions in multiple dimensions" mean? What is the possibility of error of this process?
- Oh, and lastly, do you know of any chart or list anywhere that details the known possible decay paths of bosons and fermions?
That's all for now. I SO hope you can answer any of these questions, because Wikipedia can't :'( (as someone who enjoys theory, I find it annoying when Wikipedia can neither confirm nor deny my conjectures, despite the fact that the information is certainly out there somewhere, and someone knows it.)
Replies from: RolfAndreassen, RolfAndreassen↑ comment by RolfAndreassen · 2012-06-12T06:06:23.456Z · LW(p) · GW(p)
Ok, that's a lot of questions. I'll do my best, but I have to tell you that your quest is, in my opinion, a bit quixotic.
What is the reason for the existence of the theory of the charm quark (or any generation 2-3 quark)? What are some results of experiments that necessitate the existence of a charm quark?
Basically the strange quark is motivated by the existence of kaons, charm quarks by the D family of mesons (well, historically the J/psi, but I'm more familiar with the D mesons), and beauty quarks by the B family. As for truth quarks, mainly considerations of symmetry. Let's take kaons, the argument being the same for the other families. If the kaon were to decay by the strong force, it would be extremely short-lived, because it could go pretty immediately to two pions; there would certainly be no question of seeing it in a tracking detector, the typical timescale of strong decays being 10^-23 seconds. Even at lightspeed you don't get far in that time! We therefore conclude that there is some conservation principle preventing the strong decay, and that the force by which the kaon decays does not respect this conservation principle. Hence we postulate a strange quark, whose flavour (strangeness) is conserved by the strong force (so, no strange-to-up (or down) transition at strong-force speeds) but not by the weak force.
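To make "you don't get far" concrete, the distance covered in a strong-decay lifetime is

```latex
c\,\tau \approx \left(3\times 10^{8}\ \mathrm{m/s}\right)\times\left(10^{-23}\ \mathrm{s}\right) = 3\times 10^{-15}\ \mathrm{m},
```

about the diameter of a proton, whereas the kaons we actually track travel macroscopic distances (centimetres to metres) before decaying.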
I should note that quark theory has successfully predicted the existence of particles before they were observed; you might Google "Eightfold Path" if you're not familiar with this history, or have a look at the PDG's review. (Actually, on closer inspection I see that the review is intended for working physicists familiar with the history - it's not an introduction to the Eightfold Path, per se. Probably Google would serve you better.)
Which of the known hadrons can be directly observed in any way, as opposed to theorized as a mathematical in-between or as a trigger for some directly observable decay?
For this I have to digress into cross-sections. Suppose you are colliding an electron and a positron beam, and you set up a detector at some particular angle to the beam - for example, you can imagine the detector looking straight down at the collision point:
___detector
e+ -----> collision <------- e-
Now, the cross-section (which obviously is a function of the angle) can be thought of as the probability that you'll see something in the detector. If electron and positron just glance off each other without annihilating (at relativistic speeds this can easily happen - they have to get pretty close to interact, and our control of the beams is only so good), we call that Bhabha scattering, and it has a particular cross-section structure. For obvious reasons, the cross-section is highest at small angles; that is, it is really quite unlikely for the electron and positron to dance past each other in the exact way that throws them out at a ninety-degree angle to their previous paths; but it's pretty easy for them to give each other a one-degree kick. If you calculate the cross-section at some particular angle as a function of the total beam energy, you'll see that the higher the energy, the lower the cross-section, and indeed experiment confirms this.
What if the electron and positron do annihilate, creating a virtual photon that then decays to some other pair of particles - for example, a charm-anticharm pair? Well, again, the cross-section is highest near the beam (basically to conserve the angular momentum - you have to do spin math) and decreases with energy.
So we have this cross-section that decreases monotonically with energy. However, as you run your beam energy up, at very specific energies you will see a sharp increase and drop-off, in a classic Breit-Wigner shape. In other words, at some particular energy it suddenly becomes much more likely that your decay products get kicked away from the beam. Why is that? We refer to these bumps in the spectrum as resonances, and explain them by appealing to bound states - particles, in other words. What happens is that with an intermediate bound state, there are additional Feynman paths that open up between the initial state "electron and positron" and the final state "hit in detector at angle X". Additional paths through parameter space give you additional probability unless you're very unlucky with the phases, hence the bump in the cross-section - the final state becomes more likely. (Additionally, for reasons of spin math that I won't go into here, the decay products from a bound state of two quarks are produced much more isotropically than back-to-back quark-antiquark pairs.)
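A minimal sketch of that bump: the non-relativistic Breit-Wigner line shape. The 3.1 GeV mass and 0.1 GeV width below are illustrative values chosen for the example, not any particular particle's:

```python
def breit_wigner(E, M, Gamma):
    """Non-relativistic Breit-Wigner line shape: the bump a resonance
    of mass M and width Gamma adds to the cross-section, peaking at
    E = M with full width Gamma at half maximum (normalized to 1 at
    the peak; units are whatever you feed in, here GeV)."""
    return (Gamma / 2) ** 2 / ((E - M) ** 2 + (Gamma / 2) ** 2)

# Scanning the beam energy across an illustrative resonance at
# 3.1 GeV with width 0.1 GeV: sharp rise and drop-off around the peak.
for E in (2.9, 3.0, 3.05, 3.1, 3.15, 3.2, 3.3):
    print(f"E = {E:.2f} GeV  relative bump = {breit_wigner(E, 3.1, 0.1):.3f}")
```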
Here's a different way of looking at it. Suppose you have a detector that encloses the collision space, so you can reconstruct most of the decay products; and you decide to take all pion pairs and calculate "If these two particles came from a common decay, what was the mass of the particle that decayed?" Then this spectrum will basically be flat, but you will get an occasional peak at specific masses. Again, we explain this by appeal to a bound state.
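A sketch of that calculation, in natural units (GeV, c = 1), assuming both daughters are charged pions; the demo momenta are chosen so the pair reconstructs to the neutral kaon mass:

```python
import math

PION_MASS = 0.1396  # charged pion mass, GeV

def invariant_mass(p1, p2, m1=PION_MASS, m2=PION_MASS):
    """Mass the common parent would have had, from the daughters'
    3-momenta:  m^2 = (E1 + E2)^2 - |p1 + p2|^2.  Histogramming this
    over all pairs gives a flattish combinatoric floor plus peaks at
    the masses of real parent particles."""
    E1 = math.sqrt(m1 ** 2 + sum(c * c for c in p1))
    E2 = math.sqrt(m2 ** 2 + sum(c * c for c in p2))
    ptot = [a + b for a, b in zip(p1, p2)]
    return math.sqrt((E1 + E2) ** 2 - sum(c * c for c in ptot))

# Two back-to-back 0.206 GeV pions reconstruct to ~0.498 GeV:
# a K0 -> pi+ pi- candidate.
print(invariant_mass([0.206, 0.0, 0.0], [-0.206, 0.0, 0.0]))
```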
It occurs to me that this may not actually differ from what you call "mathematical in-betweens"; I have answered as though this phrase refers to virtual particles, which are indeed a bit of a convenient fiction. Anyway, this is why we believe in the various hadrons and mesons.
(I'm getting "comment too long" errors; splitting my answer here.)
↑ comment by RolfAndreassen · 2012-06-12T06:06:35.583Z · LW(p) · GW(p)
I had to split my answer in two, and clumsily posted them in the wrong order - some of this refers to an 'above' which is actually below. I suggest reading in chronological rather than page order. :)
Am I right in thinking that the tau lepton is only theorized in order to explain an in-between decay state?
Well no, you get a specific resonance in hadron energy spectra, as described above.
If you don't know, do you know of anything related to any other fermions (or hadrons) that only exist as a theoretical in-between?
There's the notorious sigma and kappa resonances, which are basically there only to explain a structure in the pion-pion and pion-kaon scattering spectrum. Belief in these as particles proper, rather than some feature of the dynamics, is not widespread outside the groups that first saw them. (I have a photoshopped WWII poster somewhere, captioned "Is YOUR resonance needed? Unnecessary particles clutter up the Standard Model!") I see the PDG doesn't even list them in its "needs confirmation" section. I'm aware of them basically because I used them in my thesis just as a way to vary the model and see how the result varied - I had all the machinery for setting up particles, so a more-or-less fictional particle with some motivation from what others have seen was a convenient way of varying the structure.
How were the masses of the tau lepton and the top quark determined? If the methods are different for the charm quark, how was the mass of the charm quark determined?
So quark masses are a vexed subject. The problem is that you cannot catch a quark on its own, it's always swimming in a virtual soup of gluons and quarks. So all quark masses are determined, basically, by taking some model of the strong interaction and trying to back-calculate the observed hadron and meson masses. And since the strong interaction is insanely computationally intractable, you can't get a very good answer.
For the tau lepton it's rather simpler: Wait for one to decay to charged hadrons, calculate the four-momentum of the mother particle, and get the peak of the mass distribution as described above.
Does the weak interaction cause any sort of movement, or hold anything together, or does it only act as a trigger for decay?
I don't believe anyone has observed a bound state mediated purely by the weak force. In fact one of the particles in such a state would have to be a neutrino, since otherwise there would be other forces involved; and observing a neutrino is hard enough without adding the requirement that it be in a bound state. However, I suppose that in inverse beta decay, or neutrino capture, the weak force causes some movement at the final moment, to the extent that it's meaningful to speak of movement at these scales.
Why is it considered a field energy?
Because it can be quantised into carrier bosons, presumably.
When detecting gamma radiation, how much background is there to extract from?
This is really hard to give a general answer for. In the BaBar detector, photons are reconstructed by the EMC, the electromagnetic calorimeter. My rule of thumb for this instrument is that photons with energy less than 30 MeV are worthless; such energies can easily be faked by electronic noise and ambient radiation. Above 100 MeV you have to be fairly unlucky for an EMC hit to be background. I don't know if this is helpful; perhaps you can give me a better idea of the context of your question?
Does the process of extracting from the background require performing hundreds of iterations of the experiment?
Again, this is really dependent on context. Can you be more specific about what sort of experiment you're asking about?
Since you know quite a lot about it, and since the majority of my knowledge comes from Wikipedia, what does "fitting distributions in multiple dimensions" mean? What is the possibility of error of this process?
Have a look at my answer to magfrump. As for errors, our search algorithm does rely on the log-probability function being reasonably smooth, and can give misleading answers if that's not true. It can get caught in local minima; we try to avoid this by starting from several different points and checking that we converge to the same place. In some cases the assumption of symmetric errors can mislead you, so we often look at asymmetric errors as well. Most insidiously, of course, you can get the physics just wrong, but right enough to mimic the data within the limits of the fit's accuracy.
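A toy illustration of the several-starting-points strategy, using a deliberately double-welled 1-D "negative log-likelihood"; the function is invented for the example (a real fit would minimize the NLL of the actual data model):

```python
from scipy.optimize import minimize

def bumpy_nll(theta):
    """Toy 1-D 'negative log-likelihood' with two minima: a false
    local one near theta = +1.1 and the true one near theta = -1.3."""
    t = theta[0]
    return t ** 4 - 3 * t ** 2 + t

# Launch the minimizer from several starting points and compare:
# if different starts converge to different places, we know the
# landscape has local minima and the best fit needs scrutiny.
starts = (-3.0, -1.0, 0.0, 2.0, 4.0)
fits = [minimize(bumpy_nll, [s]) for s in starts]
for s, f in zip(starts, fits):
    print(f"start {s:+.1f} -> theta = {f.x[0]:+.3f}, NLL = {f.fun:.3f}")
best = min(fits, key=lambda f: f.fun)
print("best fit:", best.x[0])
```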
Oh, and lastly, do you know of any chart or list anywhere that details the known possible decay paths of bosons and fermions?
You could try the PDG's summary tables.