What was your biggest recent surprise?
post by DataPacRat · 2012-06-09T23:57:21.851Z · LW · GW · Legacy · 36 comments
I recently flipped through the "Cartoon Guide to Physics", expecting an easy-to-understand rehash of ideas I was long familiar with; and that's what I got - right up to the last few pages, where I was presented with a fairly fundamental concept that's been absent from the popular science media I've enjoyed over the years. (Specifically, that the uncertainty principle, when expressed as linking energy and time, explains what electromagnetic fields actually /are/: the propensity for virtual photons of various energies to appear.) I find myself happy to try to integrate this new understanding - and at least mildly disturbed that I'd been missing it for so long, and with an increased curiosity about how I might find any other such gaps in my understanding of how the universe works.
So: what's the biggest, or most surprising, or most interesting concept /you/ have learned of, after you'd already gotten a handle on the basics?
36 comments
Comments sorted by top scores.
comment by [deleted] · 2012-06-10T18:20:30.854Z · LW(p) · GW(p)
I was surprised that it is possible to apply simple(?) signal processing techniques to extract subtle signals from a video, e.g. somebody's heartbeat.
Surprise levels:
1) I never thought of that (that there could be useful hidden signals in standard video). Their paper references a few other attempts at this.
2) If I had thought of it, or someone had mentioned the idea, I would have guessed that those signals are not strong enough to be extracted by any method.
3) And, even if there were a signal, I would have thought it would take very powerful techniques and many assumptions (like manually annotating where you expect to see the heartbeat, etc.) to make it work.
Less of this is required than I'd expected. From the paper:
we automatically select, and then amplify, a band of temporal frequencies that includes plausible human heart rates.
Replies from: Thomas
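A minimal sketch of that temporal band-pass idea, assuming grayscale frames in a numpy array (a toy FFT filter, not the paper's actual spatio-temporal pyramid pipeline; the frame rate, band, and gain here are made up):

```python
import numpy as np

def amplify_band(frames, fps, lo_hz=0.8, hi_hz=3.0, gain=50.0):
    """Band-pass each pixel's intensity over time and amplify that band.

    frames: array of shape (num_frames, height, width), grayscale intensities.
    lo_hz..hi_hz roughly covers plausible human heart rates (~50-180 bpm).
    """
    spectrum = np.fft.rfft(frames, axis=0)
    freqs = np.fft.rfftfreq(frames.shape[0], d=1.0 / fps)
    band = (freqs >= lo_hz) & (freqs <= hi_hz)
    filtered = spectrum * band[:, None, None]      # keep only the heart-rate band
    pulse = np.fft.irfft(filtered, n=frames.shape[0], axis=0)
    return frames + gain * pulse                   # exaggerate the subtle variation

# Toy usage: 10 seconds of 30 fps "video" with a faint 1.2 Hz flicker buried in noise.
t = np.arange(300) / 30.0
toy = 0.5 + 0.001 * np.sin(2 * np.pi * 1.2 * t)[:, None, None] \
      + 0.01 * np.random.randn(300, 8, 8)
out = amplify_band(toy, fps=30)
```

Even this crude per-pixel version should make the buried 1.2 Hz flicker much more prominent once the band is amplified; the real method does the amplification on a spatial pyramid of the video.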
↑ comment by Thomas · 2012-06-11T06:18:32.119Z · LW(p) · GW(p)
In the same way, we could extract details from an astronomical video. One hour of video of a distant planet might be worth a bigger telescope. Long exposure times were the first step in this direction, long ago.
Current exoplanet detection is another, bigger step.
We simply don't yet use all the information we have.
Replies from: None
↑ comment by [deleted] · 2012-06-11T17:42:26.837Z · LW(p) · GW(p)
Astronomy is an interesting connection to think about with respect to this work. In astronomy, we're integrating the light received. In some sense this is dynamic, because there are small variations due to the atmosphere. But the underlying signal is assumed to be static? I guess there are pulsars where we don't expect that. Maybe then people have to apply similar techniques (filtering out dynamics, e.g. from the atmosphere, at frequencies far from those expected from pulsars?)
Replies from: Thomas
↑ comment by Thomas · 2012-06-12T16:21:42.071Z · LW(p) · GW(p)
You, an astronomer, should always ask yourself: given this light pattern in time, what is the most probable source that would produce it? Be it static or dynamic, whichever fits best.
Replies from: shminux
↑ comment by Shmi (shminux) · 2012-06-12T16:39:08.152Z · LW(p) · GW(p)
The standard approach is to simulate multiple possible sources and use Bayesian techniques, such as maximum likelihood, to evaluate which ones match the data best and whether the best is a good enough fit. Waveform matching in LIGO is one of the extreme examples, given how weak the potential signal is.
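A toy sketch of that approach (made-up candidate "sources" and Gaussian noise, not a real astronomy pipeline): simulate a few model light curves, score each against the data, and keep the best fit.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 500)

# Hypothetical candidate sources: each is a model light curve ("template").
templates = {
    "static":  np.ones_like(t),
    "pulsar":  1 + 0.3 * np.sin(2 * np.pi * 30 * t),
    "transit": 1 - 0.2 * ((t > 0.4) & (t < 0.6)),
}

# Fake observation: a noisy pulsar-like signal.
data = templates["pulsar"] + 0.5 * rng.normal(size=t.size)

# Under Gaussian noise, the log-likelihood of each template is (up to a constant)
# minus the sum of squared residuals, so maximum likelihood here is least squares.
scores = {name: -np.sum((data - model) ** 2) for name, model in templates.items()}
best = max(scores, key=scores.get)
print(best, scores)
```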
comment by betterthanwell · 2012-06-10T18:49:08.961Z · LW(p) · GW(p)
A very salient moment of surprise was when I realized that my mental model of a simple three-quark proton was deeply (or simply) wrong:
You may have heard that a proton is made from three quarks. Indeed here are several pages that say so. This is a lie — a white lie, but a big one. In fact there are zillions of gluons, antiquarks, and quarks in a proton. The standard shorthand, “the proton is made from two up quarks and one down quark”, is really a statement that the proton has two more up quarks than up antiquarks, and one more down quark than down antiquarks. To make the glib shorthand correct you need to add the phrase “plus zillions of gluons and zillions of quark-antiquark pairs.” Without this phrase, one’s view of the proton is so simplistic that it is not possible to understand the LHC at all.
http://profmattstrassler.com/articles-and-posts/largehadroncolliderfaq/whats-a-proton-anyway/
What still surprises me, whenever I think of it, is how we live in such a big world, even on the smallest scales we are able to probe. And also that things like nuclei happen to be stable over long enough timescales for things like chemistry and life to occur.
Replies from: Luke_A_Somers, betterthanwell
↑ comment by Luke_A_Somers · 2012-06-12T12:54:31.012Z · LW(p) · GW(p)
All of those gluons and quark-antiquark pairs are every bit as stable as the Earth's gravitational field. They're elements of the ground state for a quark.
The process of finding the ground state for a particle from its interactions, including dragging in virtual pairs to screen high field intensities around the singularity, is called Renormalization.
↑ comment by betterthanwell · 2012-06-11T09:05:51.166Z · LW(p) · GW(p)
I realized that my mental model of a simple three-quark proton was deeply (or simply) wrong.
For an explanation using more showing and less telling: Checking what's inside a proton
You’ve heard the famous statement that “a proton is made from two up quarks and a down quark”. But in this basic article, and this somewhat more advanced one, and in a recent post where I went into some details about what we know about proton structure, I’ve claimed to you that protons are chock full of particles, most of which carry a tiny fraction of the proton’s energy, and most of which are gluons, along with a substantial number of quarks and antiquarks.
What I want to do in this article is show you evidence that the statements made about proton structure in this post are true. After all, why should you have to take my word for such things? Let’s look at some LHC data, and see how it confirms these notions.
comment by Kaj_Sotala · 2012-06-10T14:47:33.761Z · LW(p) · GW(p)
Not really related to any explicit field of study, but...
Most recently, I was surprised by the extent to which the Japanese still use faxes.
Before that, I was really surprised by the whole Planetary Resources thing. My model of the world claimed that aside from some relatively minor stuff like space tourism and such, plausible pushes to actually do something new and non-trivial in space simply do not happen, and that there would be essentially no real progress in any kind of space exploration before the Singularity. At best, there would be a new private space station in orbit, or NASA would announce a manned Mars mission that would get quietly killed by budget cuts a few years later. Having a bunch of billionaires announce a real effort to actually mine asteroids was something that made it slightly easier for me to alieve in the Singularity happening some day. Before, both asteroid mining and the Singularity used to belong to the mental category of "things that I intellectually acknowledge as possible, but which would be such huge changes to the current paradigm that on a gut level, I don't really grasp either of them happening".
Replies from: Jesper_Ostman
↑ comment by Jesper_Ostman · 2012-06-10T20:20:28.286Z · LW(p) · GW(p)
Was it just something which made it easier to "alieve" (in contrast to just believing) in a singularity, or do you think this information was good evidence for updating towards a singularity being more likely? (e.g. because it shows that billionaires might invest in such crazy projects)
Replies from: Kaj_Sotala
↑ comment by Kaj_Sotala · 2012-06-11T05:01:08.614Z · LW(p) · GW(p)
I don't think it changed my beliefs about the probability of a Singularity, only my aliefs about whether "science fiction-like" events could happen.
comment by witzvo · 2012-06-10T17:18:57.972Z · LW(p) · GW(p)
I learned that I'm not crazy for having been confused by the double-speak I was taught in college about "observation" in Quantum Mechanics and that maybe there's a community where I can get straight answers to things.
According to RolfAndreassen:
'Observation' is a shorthand (for historical reasons) for 'interaction with a different system', for example a detector or a human; but a rock will do as well. I would actually suggest you read the Quantum Mechanics Sequence on this point, Eliezer's explanation is quite good.
According to Douglas_Knight,
I advocate in place of "many worlds interpretation" the phrase "no collapse interpretation."
Thanks Less Wrong.
comment by Jesper_Ostman · 2012-06-10T20:43:42.920Z · LW(p) · GW(p)
That there is reason to believe that it is "relatively easy" (say, if we survive x-risk and get a good singleton, within a million years) to colonize billions of galaxies. That makes the expected hedonic utility of x-risk reduction (ignoring the possibility of discovering new useful physics, creating universes, etc.) up to some 9 orders of magnitude greater than I had previously thought.
comment by [deleted] · 2012-06-10T05:23:29.787Z · LW(p) · GW(p)
I think my biggest recent surprise was the notion that AI didn't have to be sentient to be worth taking seriously. Seemed like such a simple idea to overlook but it totally changed my worldview.
Replies from: CasioTheSane
↑ comment by CasioTheSane · 2012-06-10T05:44:25.217Z · LW(p) · GW(p)
I find it striking that Google and Wolfram Alpha are more useful at helping people answer questions than greater-than-human intelligence AI was imagined to be in most older sci-fi novels.
comment by lukeprog · 2012-06-10T01:41:42.131Z · LW(p) · GW(p)
Not very recent, but...
I was surprised way back when I learned that we had already located some neurons which seem to encode the expected utility of possible actions. ('Utility' here isn't meant in the philosophical sense but in the neuroeconomic sense.)
I also remember being amused 1+ years ago when I did some more studying in AI and decision theory and learned that all currently described AI agents are Cartesian dualists. (This is old news 'round these parts, I know.)
Replies from: rocurley
↑ comment by rocurley · 2012-06-10T07:23:51.237Z · LW(p) · GW(p)
all currently described AI agents are Cartesian dualists
I don't quite understand what you mean by that, can you elaborate?
Replies from: None
↑ comment by [deleted] · 2012-06-12T22:34:52.678Z · LW(p) · GW(p)
Some AI have a limited understanding of their own bodies; they can learn kinematic models of the actuators in the robots they control or form "affordances", ideas about what kind of interactions with their environments they can effect. But very few (apparently no?) cognitive architectures or AI designs model their minds as being algorithms executing on their computing hardware, so whatever metacognitive representation and processing they have, it's "disembodied", like old ideas of the mind being made of spooky stuff. The combination of physical bodies and spooky minds is called Cartesian dualism, after the philosopher René Descartes.
comment by JoshuaZ · 2012-06-10T00:58:57.168Z · LW(p) · GW(p)
Even though the composition of two rotations is a rotation on a standard sphere, the same is not true for higher dimensional spheres. Possibly even weirder, on a sphere the composition of two periodic rotations is not necessarily periodic.
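A quick numerical illustration of the first claim (my own sketch with numpy): compose a rotation in the (x1, x2) plane of R^4 with one in the (x3, x4) plane. A single-plane rotation in R^4 fixes the plane orthogonal to its rotation plane, so it has eigenvalue 1 twice; the composition has no eigenvalue 1 at all for these angles, so it is not a rotation about any plane.

```python
import numpy as np

def plane_rotation(n, i, j, theta):
    """Rotation of R^n by angle theta in the coordinate plane spanned by axes i and j."""
    R = np.eye(n)
    R[i, i] = R[j, j] = np.cos(theta)
    R[i, j] = -np.sin(theta)
    R[j, i] = np.sin(theta)
    return R

A = plane_rotation(4, 0, 1, 0.7)   # rotation in the (x1, x2) plane
B = plane_rotation(4, 2, 3, 1.3)   # rotation in the (x3, x4) plane
C = A @ B                          # composition of the two rotations

# A simple rotation in R^4 fixes the plane orthogonal to its rotation plane,
# so its eigenvalues include 1 twice.  The composition's do not:
print(np.round(np.linalg.eigvals(A), 3))  # includes 1, 1
print(np.round(np.linalg.eigvals(C), 3))  # two conjugate pairs, no eigenvalue 1
```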
Replies from: magfrump, Nisan
↑ comment by Nisan · 2012-06-10T01:19:10.075Z · LW(p) · GW(p)
I don't get the first thing. Isn't SO(n) the group of rotations of the unit n-sphere?
Replies from: JoshuaZ
↑ comment by JoshuaZ · 2012-06-10T01:24:10.580Z · LW(p) · GW(p)
In this context, rotations are rotations about some (n-2)-subspace by some angle, rather than all orientation-preserving isometries.
Replies from: ciphergoth, army1987
↑ comment by Paul Crowley (ciphergoth) · 2012-06-10T06:53:34.253Z · LW(p) · GW(p)
Wow, I didn't know that. It makes sense now I think about it though; SO(n) must be something like an n(n-1)/2 dimensional space, but the space of rotations about an (n-2)-subspace must be ... err ... something smaller - maybe 2n-3 dimensional? I may be abusing the idea of dimension here...
Replies from: gjm
↑ comment by gjm · 2012-06-10T10:18:43.919Z · LW(p) · GW(p)
First of all, terminology. SO(n) is orientation-preserving orthogonal transformations on n-space, or equivalently the orientation-preserving symmetries of an (n-1)-sphere in n-space. So Joshua's statement is about SO(n) for n>3.
OK. So the obvious way to interpret "rotation about an axis" in many dimensions is: you choose a 2-dimensional subspace V, then represent an arbitrary vector as v+w with v in V and w in its orthogonal complement, and then you rotate v. The dimension of the set of these things is (n-1)+(n-2) from choosing V -- you can pick one unit vector to be in V, and then another unit vector orthogonal to it -- plus 1 from choosing how far to rotate. So, 2n-2.
And yes, the dimension of SO(n) is n(n-1)/2. One way to see this: you've got matrices with n^2 elements, and n(n+1)/2 constraints on those elements because all the pairwise inner products of the columns (including each column with itself) are specified.
These dimensions are all topological dimensions rather than vector-space dimensions, since the sets we're looking at aren't vector subspaces of R^(n^2), but there's nothing abusive about that :-).
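A complementary way to check the n(n-1)/2 count numerically (a sketch, assuming scipy is available): a skew-symmetric n×n matrix has exactly n(n-1)/2 free entries above the diagonal, and its matrix exponential lands in SO(n).

```python
import numpy as np
from scipy.linalg import expm

n = 5
rng = np.random.default_rng(0)

# Build a random skew-symmetric matrix: n(n-1)/2 free parameters above the diagonal.
params = rng.normal(size=n * (n - 1) // 2)
S = np.zeros((n, n))
S[np.triu_indices(n, k=1)] = params
S = S - S.T

R = expm(S)  # exponential map: skew-symmetric -> special orthogonal
print(np.allclose(R.T @ R, np.eye(n)))   # True: R is orthogonal
print(np.isclose(np.linalg.det(R), 1.0)) # True: determinant +1
print(len(params), n * (n - 1) // 2)     # 10 10
```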
Replies from: ciphergoth
↑ comment by Paul Crowley (ciphergoth) · 2012-06-10T10:52:19.915Z · LW(p) · GW(p)
It can't be 2n-2 because it's 3 when n=3. I get 2n-3 because the first vector is chosen with n-1 degrees of freedom, then the second with n-2, then subtract one because of the equivalence class of rotations, then add one for choosing how far to rotate.
EDIT: More generally, I think that the dimension of the space of k-dimensional subspaces of an n-dimensional space is k(n-k), so where k=2 you get 2n-4, then add one for choosing how far to rotate. I'd feel better if I knew what I meant by "dimension" here though; it's not a vector space.
Replies from: witzvo, gjm, magfrump, army1987
↑ comment by witzvo · 2012-06-10T17:39:50.153Z · LW(p) · GW(p)
These are the best references I know:
As for topological dimension, roughly, if you consider a neighborhood of a point in the space, what does space look like from there? Locally it's Euclidean if you're "on" a manifold. The rigorous definition involves charts. See also Lebesgue covering dimension.
↑ comment by gjm · 2012-06-10T19:59:43.365Z · LW(p) · GW(p)
Meh, you're right: the dimension of the space of 2-dimensional subspaces of n-space is 2n-4, not 2n-3. The reason why my handwavy dimension-counting above was wrong is ("of course") that I failed to "subtract one because of the equivalence class of rotations". And yes, you're right that in general it's k(n-k).
"Dimension" here means: locally the set looks like a that-many-dimensional vector space. That is, e.g., any element of SO(n) has a neighbourhood that's topologically the same as a neighbourhood in R^(n(n-1)/2).
↑ comment by A1987dM (army1987) · 2012-06-10T19:52:47.030Z · LW(p) · GW(p)
I'd feel better if I knew what I meant by "dimension" here though; it's not a vector space.
The number of parameters you need to label each element (provided the labelling is a continuous function, otherwise you can label points of R^2 with a single parameter e.g. (3.1415..., 2.7182...) -> 32.174118...)
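A toy version of that digit-interleaving trick (my own sketch; it only handles the simple case where both numbers have integer parts of the same length):

```python
def interleave(x: str, y: str) -> str:
    """Interleave the digits of two decimal strings with equal-length integer parts."""
    xi, xf = x.split(".")
    yi, yf = y.split(".")
    head = "".join(a + b for a, b in zip(xi, yi))
    tail = "".join(a + b for a, b in zip(xf, yf))
    return head + "." + tail

print(interleave("3.1415", "2.7182"))  # '32.17411852', as in the example above
```

De-interleaving recovers the pair, so one real number can encode two; the catch, as noted above, is that this labelling is wildly discontinuous.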
Replies from: ciphergoth
↑ comment by Paul Crowley (ciphergoth) · 2012-06-10T19:59:41.092Z · LW(p) · GW(p)
To make this precise, you need the idea of "charts" and "atlases" that witzvo references.
↑ comment by A1987dM (army1987) · 2012-06-10T15:32:20.412Z · LW(p) · GW(p)
I don't recall encountering this usage before. Is it widespread?
Replies from: JoshuaZ
comment by DataPacRat · 2012-06-11T01:07:18.989Z · LW(p) · GW(p)
Just figured something new out, based on my original post here.
The energy/time version of the uncertainty principle says that virtual particles of any given energy can spontaneously appear - but the bigger the energy, the shorter they last. This explains why the strength of electromagnetism falls off at a distance - virtual photons with high energies last for short times and thus travel short distances, while virtual photons with low energies can last for longer times and travel longer distances. All straight from the book.
But I just recalled that other forces, the strong and weak, are described as having a range limitation. I've always read about that range-limit existing - but since no reason was given for it, and I couldn't figure it out, I just shrugged my shoulders with an assumption of 'quantum weirdness'. But now I have an idea /why/ that range limit exists: with a minimum amount of energy in any given virtual particle for those forces, in the form of those particles' rest mass, the uncertainty principle thus also imposes a maximum lifespan, and thus a maximum range.
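A back-of-the-envelope version of that argument (my own sketch; this is the standard estimate range ≈ ħ/(mc), with approximate carrier rest masses: the pion for the nuclear force between nucleons, the W boson for the weak force):

```python
hbar = 1.054_571_8e-34      # J·s
c = 2.997_924_58e8          # m/s
eV = 1.602_176_6e-19        # J

def range_estimate(mass_GeV):
    """Rough range of a force carried by a virtual particle of the given rest mass."""
    E = mass_GeV * 1e9 * eV   # rest energy in joules
    dt = hbar / E             # lifetime allowed by the energy/time uncertainty relation
    return c * dt             # distance travelled in that time

print(range_estimate(0.14))   # pion (~0.14 GeV): ~1.4e-15 m, roughly nuclear size
print(range_estimate(80.4))   # W boson (~80 GeV): ~2.5e-18 m, the weak force's tiny range
```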
It's been such a long time since I've had a chance to figure out something about physics that I wasn't simply directly told, it's a surprisingly pleasant experience. :)
(Now, I'm wondering if this particular idea implies that since gravity's range is infinite, that implies that if gravity is transmitted by force-particles rather than space-curvature (assuming that that's a distinction with meaning), then the virtual gravity force-carrying particles have to be able to have arbitrarily small energies, and thus no significant rest mass...)
Replies from: Alejandro1
↑ comment by Alejandro1 · 2012-06-11T03:17:05.466Z · LW(p) · GW(p)
Your insight about forces carried by massless vs. massive particles and their respective ranges is absolutely correct. Congratulations!
(Now, I'm wondering if this particular idea implies that since gravity's range is infinite, that implies that if gravity is transmitted by force-particles rather than space-curvature (assuming that that's a distinction with meaning), then the virtual gravity force-carrying particles have to be able to have arbitrarily small energies, and thus no significant rest mass...)
It is generally agreed that the still-to-be-constructed theory of quantum gravity will have gravitons, particles carrying the gravitational force analogous to photons for the EM field, and yes, gravitons should be massless as you argue. This is not, however, in conflict with the description of gravity as space-time geometry. Though the full details will have to wait till we understand quantum gravity completely, provisionally we can make unambiguous sense of gravitons at the perturbative level: think of a gravitational wave as a small ripple in spacetime; then one can quantize this perturbation, and gravitons are to the wave as photons are to classical EM waves.
comment by witzvo · 2012-06-11T06:46:40.493Z · LW(p) · GW(p)
I just had a big "update".
EDIT: I'm a little less sure now. See the end.
I found something that teaches programming, at an immediate level, to non-programmers without their knowing they are programming, and without any cruft. I always wished this was possible, but now I think we're really close.
If you want to get programming, and are a visual thinker, but never could get over some sort of inhibition, I think you should try this. You won't even know you're programming. It may not be "quite" programming, but it's closer than anything else I've seen at this level of simplicity. And anyway it's fun and pretty.
The important thing about this "programming" environment is that it is completely concrete. There are no formal "abstractions," and yet it's all about concrete representation of the idea formerly known as abstractions.
Enough words. Take a look: http://recursivedrawing.com/
[I was excited because to me this seems awfully close to the untyped lambda-calculus, made magically concrete. The "normal forms" are the "fixed points" are the fractals. It's all too much and requires more thought. It only makes pictures, though, for now. However, I can't see anything in it like "application" so... the issue of how close it is seems actually quite subtle. Somehow application's being bypassed in a static way. Curious. I'm sure there's a better way to see it I just haven't gotten yet.]
PS: Blue! Blue! Blue! (**)
** This is a joke that will only make sense if you've read The Name of the Wind by Rothfuss. If you prefer to spoil yourself, here, but buy the book afterward if you like it.
cross-posted here [I'm not sure about the etiquette, but I think this idea deserves not to be lost in an old thread.]
Replies from: IlyaShpitser
↑ comment by IlyaShpitser · 2012-06-12T18:03:39.966Z · LW(p) · GW(p)
A flash game along the same lines: