Contrast this to the notion we have in probability theory, of an exact quantitative rational judgment. If 1% of women presenting for a routine screening have breast cancer, and 80% of women with breast cancer get positive mammographies, and 10% of women without breast cancer get false positives, what is the probability that a routinely screened woman with a positive mammography has breast cancer? 7.5%. You cannot say, "I believe she doesn't have breast cancer, because the experiment isn't definite enough." You cannot say, "I believe she has breast cancer, because it is wise to be pessimistic and that is what the only experiment so far seems to indicate." 7.5% is the rational estimate given this evidence, not 7.4% or 7.6%. The laws of probability are laws.
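A quick sketch of where that 7.5% comes from, plugging the quoted numbers into Bayes' theorem (the Python below is just an illustration of the arithmetic; the variable names are mine):

    # Bayes' theorem applied to the screening numbers quoted above.
    p_cancer = 0.01            # prior: 1% of screened women have breast cancer
    p_pos_given_cancer = 0.80  # sensitivity: 80% true positive rate
    p_pos_given_healthy = 0.10 # 10% false positive rate

    # Total probability of a positive mammography
    p_pos = p_cancer * p_pos_given_cancer + (1 - p_cancer) * p_pos_given_healthy

    # Posterior probability of cancer given a positive result
    p_cancer_given_pos = (p_cancer * p_pos_given_cancer) / p_pos
    print(round(p_cancer_given_pos, 3))  # 0.075, i.e. about 7.5%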
I'm having trouble understanding what is meant by this essay. Where do you think these probabilities in your likelihood estimation problem came from in the first place? What on Earth does it mean to be precisely certain of a probability?
In my mind, these are summaries of prior experience - things that came out of the dynamic state machine we know of as the world. Let's start at the beginning: You have a certain incidence of cancer in the population. What is it? You don't know yet, you have to test - this is a science problem. Hey, what is cancer in the first place? Science problem. You have this test - how are you going to figure out what the probabilities of false positives and false negatives are? You have to try the test, or you have no information on how effective it is, or what to plug into the squares in your probability problem.
Bayesian reasoning can be made to converge on something like a reasonable distribution of confidence if you keep feeding it information. Where does the information you feed it come from? If not experience, the process is eating its own tail, and cannot be giving you information about the world! Prior probabilities, absent any experience of the problem domain, are arbitrary, are they not?
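As a minimal sketch of what "feeding it information" looks like, here is a toy coin-weight example (the Beta prior, the true bias, and the flip count are my own arbitrary choices, not anything from the essay). The prior contributes almost nothing; it is the observations drawn from the world that move the posterior toward the true value:

    import random

    # Toy illustration: estimating an unknown coin bias with a Beta prior.
    random.seed(0)
    true_bias = 0.7          # what the world is actually doing (unknown to the agent)
    alpha, beta = 1.0, 1.0   # flat prior, chosen arbitrarily

    for flip in range(1000):
        heads = random.random() < true_bias  # experience: data from the world
        alpha += heads
        beta += not heads

    print(alpha / (alpha + beta))  # posterior mean, close to 0.7 after 1000 flips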
Also, for more complicated problems, such as following a distribution around in a dynamic system: you also have to have a model of what the system is doing - that is also an assumption, not a certainty! If your theory about the world is wrong or inapplicable, your probability distribution is going to propagate from its initial value to a final value, and that final value will not accord with the statistics of the data coming from the external world. You'll have to try different models until you find a better one that starts spitting out the right output statistics for the right input statistics. You have no way of knowing that a model is right or wrong a priori. Following Bayesian statistics around in a deterministic state machine is a straightforward generalization of following single states around in a deterministic state machine, but your idea of the dynamics is distinct (usually far simpler, for one thing) from what the world is actually doing.
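A toy sketch of that mismatch, with made-up linear dynamics (both models and all the numbers are my own invention): an ensemble propagated under the assumed model ends up with a spread that disagrees with the data generated by the true dynamics, and that disagreement is the only clue that the model, not just the prior, needs revising.

    import random

    # Toy illustration: propagate an ensemble under an *assumed* model of the
    # dynamics and compare against data generated by the *true* dynamics.
    random.seed(0)

    def true_step(x):      # what the world actually does
        return 0.5 * x + random.gauss(0, 1.0)

    def assumed_step(x):   # our (wrong) model of the dynamics
        return 0.9 * x + random.gauss(0, 1.0)

    world  = [0.0] * 5000
    belief = [0.0] * 5000
    for _ in range(100):
        world  = [true_step(x) for x in world]
        belief = [assumed_step(x) for x in belief]

    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    # The predicted spread disagrees with the observed spread.
    print(variance(world), variance(belief))  # roughly 1.3 vs roughly 5.3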
My point isn't that it is unreasonable to use symmetric (or antisymmetric) wavefunctions until we discover something that requires us to use a more complicated model. My objection is to an error in thinking that holds that such potential future discoveries are a priori impossible. I'm with philosopher Bob on this one.
That is a great explanation. Thanks.
I think quantum physicists here are making the same mistake that led to the Gibbs paradox in classical physics. Of course, my textbook in classical thermodynamics tried to sweep the Gibbs paradox under the quantum rug, and completely missed the point of what it was telling us about the subjective nature of classical entropy. Quantum physics is another deterministic, reversible state machine, so I don't see why it is different in principle from a "classical world".
While it is true that a wavefunction or something very much like it must be what the universe is using to do its thing (is the territory), it isn't necessarily true that our wavefunction (the one in our heads that we are using to explain the set of measurements which we can make) contains the same information. It could be a projection of some sort, limited by what our devices can distinguish. That is a map of the territory which is not, even in principle, complete.
PS – not that I’m holding my breath that we’ll invent a device that can distinguish between “electron isotopes” or other particles (their properties are very regular so far), but it’s important to understand what is in principle possible so your mind doesn’t break if we someday end up doing just that.
I have a counter-hypothesis: If the universe did distinguish between photons, but we didn't have any tests which could distinguish between photons, what this physically means is that our measuring devices, in their quantum-to-classical transitions (yes, I know this is a perception thing in MWI), are what add the amplitudes before taking the squared modulus. Our measuring devices can't distinguish, which is why we can get away with representing the hidden "true wavefunction" (or object carrying similar information) with a symmetric wavefunction. If we invented a measurement device which was capable of distinguishing photons, this would mean that photon A and photon B striking it would dump amplitude into distinct states in the device rather than the same state, and we would no longer be able to represent the photon field with a symmetric wavefunction if we wanted to make predictions.
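A toy numerical illustration of that difference (the amplitudes are made up, and this ignores the full symmetrization machinery): when both photons dump amplitude into the same detector state, the amplitudes add before the squared modulus; when the device distinguishes them, the squared moduli add separately and the interference term disappears.

    # Toy amplitudes for two photons arriving at a detector; purely illustrative.
    a = 0.6 + 0.0j
    b = -0.6 + 0.0j

    # Device cannot distinguish the photons: both dump amplitude into the SAME
    # detector state, so amplitudes add before taking the squared modulus.
    p_indistinguishable = abs(a + b) ** 2          # 0.0 -- complete (destructive) interference

    # Device can distinguish them: amplitude goes into DISTINCT detector states,
    # so each squared modulus contributes separately and interference vanishes.
    p_distinguishable = abs(a) ** 2 + abs(b) ** 2  # 0.72

    print(p_indistinguishable, p_distinguishable)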
(I don't claim to be using my notes to any great effect, but this is what I do with them):
I've noticed that I seldom actually use my notes as a reference. When I need to refer to something, I go to a place in a book somewhere. Rather, during a lecture, my notebook for the class seems to function more as a way to keep me paying attention to the lecturer, and to run various complicated pieces of information (equations, etc.) across my mind. (Okay, I do sort of refer to these during exam study, but the books tend to be more legible.)
I also do a lot of my own investigating of various subjects. I will be reading a book, and noting the equations, then go off on a tangent playing with the equations, or attempt to re-derive something that I may or may not have played with before. I have several 5-subject spiral-bound math notebooks that I will fill with whatever ideas I am currently playing with. I try to expend one of these every 3 months or so, though my current one is 5 months old. :'/
When I am done, I clip the spiral binding and roll it out of the notebook, then use my document scanner to scan the thing and put it in my notebook library for future reference. (Some of it I do end up looking back to, but hardly most of it.)
I've never understood why explaining the Born Rule is less of a problem for any of the other interpretations of QP than it is for MWI. Copenhagen, IIRC, simply asserts it as an axiom. (Rather, it seems to me that MWI is one of the few that even tries to explain it!)
The problem that I've always had with the "utility monster" idea is that it's a misuse of what information utility functions actually encode.
In game theory or economics, a utility function is a rank ordering of preferred states over less preferred states for a single agent (who presumably has some input he can adjust to solve for his preferred states). That's it. There are no "global" utility functions or "collective" utility measures that don't run into problems when individual goals conflict.
Given that an agent's utility function only encodes preferences, turning up the gain on it really really high (meaning agent A really reaaaally cares about all of his preferences) doesn't mean that agents B, C, D, etc. should take A's preferences any more or less seriously. Multiplying it by a large number is like multiplying a probability distribution or an eigenvector by a really large number - the relative frequencies and the pointing direction stay exactly the same.
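A minimal sketch of that scaling point (the outcomes and numbers are invented for illustration): multiplying an agent's utility function by a huge constant leaves every choice it makes unchanged, because only the ordering matters.

    # Toy illustration: an agent's choice depends only on the ordering of its
    # utility values, so scaling the whole function changes nothing it decides.
    outcomes = ["stay home", "go to work", "take a vacation"]
    utility = {"stay home": 1.0, "go to work": 2.0, "take a vacation": 3.0}

    def best(u):
        return max(outcomes, key=lambda o: u[o])

    scaled = {o: 1_000_000 * v for o, v in utility.items()}  # "really really cares"

    print(best(utility))  # 'take a vacation'
    print(best(scaled))   # still 'take a vacation' -- same ordering, same choice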
Before some large number of people should sacrifice their previous interests on the altar of Carethulu, there should be some new reason why these others (not Carethulu) should want to do so (implying a different utility function for them).
Hmm. In a certain sense, are these sufficient conditions to actually define an organization with boundaries?
I don't think many of us have ever seen the outside of that university. :-P