[Link] Quantum theory as the most robust description of reproducible experiments
post by gRR · 2014-05-08T11:18:52.888Z · LW · GW · Legacy · 12 comments
The paper: http://www.sciencedirect.com/science/article/pii/S000349161400102X
Authors: Hans De Raedt, Mikhail I. Katsnelson, Kristel Michielsen
Abstract
It is shown that the basic equations of quantum theory can be obtained from a straightforward application of logical inference to experiments for which there is uncertainty about individual events and for which the frequencies of the observed events are robust with respect to small changes in the conditions under which the experiments are carried out.
12 comments
comment by Mitchell_Porter · 2014-05-08T14:33:26.418Z · LW(p) · GW(p)
I look at the abstracts of new papers on the quant-ph archive every day. This is a type of paper which, based on the abstract, I would almost certainly not bother to look at. Namely, it proposes to explain where quantum theory comes from, in terms which obviously seem like they will not be enough. I read the promise in the title and abstract and think, "Where is the uncertainty principle going to come from - the minimum combined uncertainty for complementary observables? How will the use of complex numbers arise?"
I did scroll through the paper and notice lots of rigorous-looking probability formalism. I was particularly waiting to see how complex numbers entered the picture. They show up a little after equation 47, when two real-valued functions are combined into one complex-valued function... I also noticed that the authors were talking about "Fisher information". This was unsurprising: there are other people who want to "derive physics from Fisher information", so clearly this paper is part of that dubious trend.
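To spell out what that move usually looks like in this literature (a generic Madelung-style sketch, assuming the paper follows the usual pattern - not necessarily its exact equation 47): a probability density P(x) and a real phase function S(x) are fused into a single complex-valued function,

\[
\psi(x) \;=\; \sqrt{P(x)}\, e^{\,i S(x)/\hbar},
\qquad |\psi(x)|^2 = P(x),
\]

after which the pair of coupled real equations governing P and S can be repackaged as one complex equation of Schrödinger type.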
At a guess - without having worked through the paper - I would say that the authors' main sin will turn out to be that they do not do anything at all like deriving quantum theory - that instead their framework is something much, much looser and less specific - but that they nonetheless give their article a title implying they can derive the whole of QM from it. Not only do they thereby falsely create the impression of having answered a basic question about reality, but their fake answer is a bland one, dulling further interest, and it is presented with an appearance of rigor, which makes it look authoritative. I would also expect that, when they get to the stage of trying to derive actual QM, they compound their major sin with the minor one of handwaving in support of a preordained conclusion - that they join their two real-valued functions together in a way which is really motivated only by their knowing what QM looks like, but for which they invent some independent excuse, since they are supposedly deriving QM.
All the foregoing may be regarded as a type of prediction: these are the dodgy misrepresentations I would expect to find in the paper if I actually sat down and scrutinized it in detail. I really don't want to do that, since time is precious, but I also didn't want to let this post go unremarked. Is it too much to hope that some coalition of Less Wrong readers, knowing about both probability and physics, will have the time and the will to look more closely, identify specific leaps of logic, and work out just what is actually going on in the paper? It may also be worth looking for existing criticisms of the "physics from Fisher information" school of thought - maybe someone out there has already written the ideal explanation of its shortcomings.
↑ comment by n4r9 · 2014-05-11T10:17:18.577Z · LW(p) · GW(p)
I wonder if you would apply the same criticism to so-called "derivations" of quantum theory from information theoretic principles, specifically those which work within the environment of general probabilistic theories. For example:
http://arxiv.org/abs/1011.6451 ; http://arxiv.org/abs/1004.1483 ; http://arxiv.org/abs/quant-ph/0101012
The above links, despite having perhaps overly strong titles, are fairly clear about what assumptions are made and what is derived. These assumptions are more than simply uncertainty and robust reproducibility: e.g. one assumption made in all of the papers above is that any two pure states are linked by a reversible transformation (in the first link, a slightly modified version of this is assumed). Of course, "pure state" and "reversible transformation" are well-defined concepts within the general probabilistic framework which generalize the meaning of the terms in quantum theory.
Since this research is closely related to my PhD, I feel compelled to answer your questions about uncertainty relations and complex numbers in this context. General probabilistic theories provide an abstracted formalism for discussing experiments in terms of measurement choices and outcomes. Essentially any physical theory that predicts probabilities for experimental outcomes (a "prediction calculus" if you like) occupies a place within that formalism, including the complex Hilbert space paradigm of quantum theory. The idea is to whittle down, by means of minimal reasonable assumptions, the full class of general probabilistic theories until one ends up with the theory that corresponds to quantum theory. What you then have is a prediction calculus equivalent to that of complex Hilbert space quantum theory. In short, complex numbers aren't directly derived from the assumptions; rather, they can be seen simply as part of a less intuitive representation of the same prediction calculus. Uncertainty relations can of course be deduced from the general probabilistic theory if desired, but since they are not part of the actual postulates of quantum theory, there hasn't been much point in doing so. It bears mentioning that this "whittling down" process has so far been achieved only for finite-dimensional quantum theory, as far as I'm aware, although there is work being done on the infinite-dimensional case.
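For concreteness, here is the bare skeleton of that formalism (standard definitions, not specific to any one of the papers above): a preparation corresponds to a state \(\omega\) in some convex set \(\Omega\), a measurement outcome corresponds to an effect, i.e. an affine map \(e : \Omega \to [0,1]\), and the entire predictive content of a theory is the assignment

\[
\Pr(\text{outcome } k \mid \text{state } \omega) \;=\; e_k(\omega),
\qquad \sum_k e_k(\omega) = 1 \ \text{ for every } \omega \in \Omega.
\]

Quantum theory is the special case in which \(\Omega\) is the set of density operators on a complex Hilbert space and \(e_k(\omega) = \mathrm{Tr}(E_k\,\omega)\) for some POVM \(\{E_k\}\).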
↑ comment by Mitchell_Porter · 2014-05-20T08:51:42.113Z · LW(p) · GW(p)
I have no problem with alternative derivations of quantum theory - if they are correct! But the framework in this paper is too weak to qualify. Look at their definition of 'category 3a' models: they are in effect suggesting that quantum mechanics is the appropriate prediction calculus, or framework for reasoning, for anything matching that description.
But in fact category 3a also includes scenarios which are completely classical. At best, they have defined a class of prediction calculi which includes quantum mechanics as a special case, but then go on to claim that this definition is the whole story about QM.
↑ comment by V_V · 2014-05-11T14:19:32.053Z · LW(p) · GW(p)
There is nothing special about complex numbers in quantum mechanics. You can get rid of them by adding an extra dimension to the Hilbert space.
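One standard version of this trick, assuming that is the construction meant here: write \(\psi = u + iv \in \mathbb{C}^n\) and identify it with the real vector \((u, v) \in \mathbb{R}^{2n}\); multiplication by \(i\) then becomes the real matrix

\[
J = \begin{pmatrix} 0 & -I_n \\ I_n & 0 \end{pmatrix},
\qquad J^2 = -I_{2n},
\]

and every complex-linear operator on \(\mathbb{C}^n\) becomes a real-linear operator on \(\mathbb{R}^{2n}\) commuting with \(J\). (Strictly, this doubles the dimension rather than adding one, but the point stands: the complex structure can be traded for extra real structure.)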
↑ comment by jsteinhardt · 2014-05-14T04:18:14.882Z · LW(p) · GW(p)
That doesn't seem true without causing SU(n) to lose its privileged status as the transformation group on quantum states.
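To make that concrete (a standard fact about the realified picture sketched above): the orthogonal transformations of \(\mathbb{R}^{2n}\) that preserve the complex structure \(J\) form a copy of the unitary group,

\[
\{\, O \in SO(2n) \;:\; OJ = JO \,\} \;\cong\; U(n),
\]

so in the purely real formulation nothing intrinsic singles out this subgroup of \(SO(2n)\); the restriction has to be imposed by hand, which is presumably the 'privileged status' being lost.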
↑ comment by gRR · 2014-05-08T21:16:30.796Z · LW(p) · GW(p)
Well, I liked the paper, but I'm not knowledgeable enough to judge its true merits. It deals heavily with Bayesian-related questions, somewhat in Jaynes's style, so I thought it could be relevant to this forum.
At least one of the authors is a well-known theoretical physicist with an awe-inspiring Hirsch index, so presumably the paper is not trivially worthless. I think it merits a more careful read.
↑ comment by Mitchell_Porter · 2014-05-09T10:26:58.961Z · LW(p) · GW(p)
Someone can build a career on successfully and ingeniously applying QM, and still have personal views about why QM works, that are wrong or naive.
Rather than just be annoyed with the paper, I want to identify its governing ideas. Basically, this is a research program which aims to show that quantum mechanics doesn't imply anything strikingly new or strange about reality. The core claim is that quantum mechanics is the natural formalism for describing any phenomenon which exhibits uncertainty but which is still robustly reproducible.
In slightly more detail: First, there is no attempt to figure out hidden physical realities. The claim is that in any possible world where certain experimental results occur, QM will provide an apt and optimal description of events, regardless of what the real causes are. Second, there is a determination to show that QM is somehow straightforward or even banal: 'quantum theory is a “common sense” description of the vast class of experiments that belongs to category 3a.' Third, the authors are inspired by Jaynes's attempt to obtain QM from Bayes, and Frieden's attempt to get physics from Fisher information, which they think they can justify for experiments that are "robustly" reproducible.
Having set out this agenda, what evidence do the authors provide? First, they describe something vaguely like an EPR experiment, make various assumptions about how the outputs behave, and then show that these assumptions imply correlations like those produced when a particular entangled state is used as input in a real EPR experiment. They also add that with different starting assumptions, they can obtain outputs like those of a different entangled state.
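For reference, the target in such an exercise is presumably the standard singlet correlation: for two spin-1/2 particles prepared in the state \(|\psi^-\rangle\) and measured along unit vectors \(\mathbf{a}\) and \(\mathbf{b}\),

\[
E(\mathbf{a}, \mathbf{b})
\;=\; \langle \psi^- |\, (\boldsymbol{\sigma}\!\cdot\!\mathbf{a}) \otimes (\boldsymbol{\sigma}\!\cdot\!\mathbf{b}) \,| \psi^- \rangle
\;=\; -\,\mathbf{a}\cdot\mathbf{b}
\;=\; -\cos\theta_{ab},
\]

which any abstract set of assumptions must reproduce before it can claim to capture the EPR-Bohm experiment.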
Then, they have a similarly abstracted description of a Stern-Gerlach experiment, and here they claim that they get the Born rule as a result of their assumptions. Finally, they consider a moving particle under repeated observation, and say that they can get the Schrödinger equation by assuming that the outcomes resemble Newtonian mechanics on average.
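That last assumption is the converse of the familiar Ehrenfest theorem, which says that for a quantum particle in a potential \(V\) the expectation values already obey Newton-like equations:

\[
\frac{d\langle \hat{x} \rangle}{dt} \;=\; \frac{\langle \hat{p} \rangle}{m},
\qquad
\frac{d\langle \hat{p} \rangle}{dt} \;=\; -\,\big\langle \nabla V(\hat{x}) \big\rangle.
\]

Running the implication in the other direction - from Newtonian averages to the Schrödinger equation - is a much stronger claim, since, for example, a classical statistical ensemble evolving under Liouville's equation satisfies the same average relations.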
Their choice of case studies, and the assumptions they allow themselves to use, both seem rather haphazard to me. They make many appeals to symmetry, e.g. one of the assumptions in their EPR case study is that the experiment will behave the same regardless of orientation. Or in deriving the Schrödinger equation, they assume translational invariance. These are standard hypotheses in the ordinary approach to physics, so it's not surprising that they should yield something like ordinary physics here too... On the other hand, they only derive the Born rule in the special case of Stern-Gerlach, so they have probably done something tricky there.
In general, it seems that they decided in advance that QM would be derived from the assumption of uncertain but reproducible phenomena, plus the application of Bayes-like reasoning, and nothing else... but then, for each of their various case studies, they allowed themselves whatever extra assumptions were necessary to arrive at the desired conclusion.
So I do not regard the paper's philosophy as having merit. But the real demonstration of this would require engaging with each of their case studies in turn, and showing that special extra assumptions were indeed used. It would also be useful to criticize their definition of 'category 3a' experiments, by showing that there are experiments in that category which manifestly do not exhibit quantum-like behavior... I suspect that the properly corrected version of their paper would be something like "Quantum theory as the most robust description of reproducible experiments that behave like quantum theory".
comment by dvasya · 2014-05-15T20:38:37.853Z · LW(p) · GW(p)
Here's a condensed summary of the paper's main points:
http://lesswrong.com/r/discussion/lw/k88/common_sense_quantum_mechanics/
comment by Shmi (shminux) · 2014-05-08T15:25:30.712Z · LW(p) · GW(p)
There have been tons of papers deriving or purporting to derive QM from this or that. I have yet to see one which uses its supposedly better/deeper foundations to answer experimentally testable questions that garden-variety QM cannot. Until then, the value of yet another such derivation, whether correct or incorrect, is too low to bother paying attention to.