Comments

Comment by SymplecticMan on Newton's law of cooling from first principles · 2024-01-18T01:05:26.282Z · LW · GW

Material properties such as thermal conductivity can depend on temperature. The actual calculation of thermal conductivity of various materials is very much outside of my area, but Schroeder's "An Introduction to Thermal Physics" has a somewhat similar derivation showing the thermal conductivity of an ideal gas being proportional to $\sqrt{T}$, based off the rms velocity and mean free path (which can be related to the average time between collisions).
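
As a rough sketch of where that scaling comes from (just the temperature dependence, not the full derivation): with the mean free path $\ell$ roughly independent of temperature at fixed density and the mean speed $\bar{v} \propto \sqrt{T}$,

$$k_t \sim \frac{C_V}{V}\,\bar{v}\,\ell \;\propto\; \sqrt{T}.$$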

Comment by SymplecticMan on Newton's law of cooling from first principles · 2024-01-17T22:33:33.668Z · LW · GW

Ah, so I'm working at a level of generality that applies to all sorts of dynamical systems, including ones with no well-defined volume. As long as there's a conserved quantity $E$, we can define the entropy $S(E)$ as the log of the number of states with that value of $E$. This is a univariate function of $E$, and temperature can be defined as the multiplicative inverse of the derivative $\frac{dS}{dE}$.
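
As a concrete sketch of that definition (using the convention above where $S$ is a bare logarithm, so temperature comes out in energy units): for a monatomic ideal gas at fixed volume, the number of accessible states grows like a power of $E$, so

$$S(E) = \text{const} + \tfrac{3N}{2}\ln E \quad\Longrightarrow\quad \frac{1}{T} = \frac{dS}{dE} = \frac{3N}{2E} \quad\Longrightarrow\quad E = \tfrac{3}{2}N\,T,$$

which is the usual equipartition result once a factor of $k_B$ is restored.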

You still in general need to specify which macroscopic variables are being held fixed when taking partial derivatives. Taking a derivative with volume held constant is different from one with pressure held constant, etc. It's not a universal fact that all such derivatives give temperature. The fact that we're talking about a thermodynamic system with some macroscopic quantities requires us to specify this, and we have various types of energy functions, related by Legendre transformations, defined based off which conjugate pairs of thermodynamic quantities they are functions of.
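
For a concrete illustration of why the held-fixed variable matters (a standard textbook manipulation, not something from the post): starting from $dU = T\,dS - p\,dV$,

$$\left(\frac{\partial U}{\partial S}\right)_V = T, \qquad \left(\frac{\partial U}{\partial S}\right)_p = T - p\left(\frac{\partial V}{\partial S}\right)_p \;\overset{\text{ideal gas}}{=}\; \frac{C_V}{C_p}\,T \;\neq\; T,$$

so the same-looking derivative returns the temperature only when volume is what's held fixed.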

By

$$\frac{dQ}{dt} \propto \frac{1}{T_1} - \frac{1}{T_2},$$

I mean

$$\frac{dQ}{dt} = k\left(\frac{1}{T_1} - \frac{1}{T_2}\right)$$

for some constant $k$ that doesn't vary with time. So it's incompatible with Newton's law.

And I don't believe this proportionality holds, given what I demonstrated about the difference between the forms you get when applying this ansatz with $Q$ versus with $T$. Can you demonstrate, for example, that the two different proportionalities you get for $\frac{dQ}{dt}$ and $\frac{dT}{dt}$ are consistent in the case of an ideal gas, given that the two should differ only by a constant independent of thermodynamic quantities in that case?

Oh, the asymmetric formula relies on the assumption I made that subsystem 2 is so much bigger than subsystem 1 that its temperature doesn't change appreciably during the cooling process. I wasn't clear about that, sorry.

Since it seems like the non-idealized symmetric form would multiply one term $\frac{1}{T_1}$ by $T_2$ and the other term $\frac{1}{T_2}$ by $T_1$, can you explain why the non-idealized version doesn't just reduce to something like Newton's law of cooling, then?

Here is some further discussion on issues with the $1/T$ law.

For an ideal gas, the root mean square velocity $v_{\text{rms}}$ is proportional to $\sqrt{T}$. Scaling temperature up by a factor of 4 scales up all the velocities by a factor of 2, for example. This applies not just to the rms velocity but to the entire velocity distribution. The punchline is, looking at a video of a hot ideal gas is not distinguishable from looking at a sped-up video of a cold ideal gas, keeping the volume fixed.
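
A quick numerical check of this scaling (a sketch in arbitrary units, sampling Maxwell-Boltzmann velocity components; the code and its names are mine, not anything from the post):

```python
import numpy as np

rng = np.random.default_rng(0)
m, kB = 1.0, 1.0           # illustrative units
T_cold, T_hot = 1.0, 4.0   # the "hot" gas is 4x the temperature of the "cold" one

def sample_velocities(T, n=500_000):
    # Maxwell-Boltzmann: each Cartesian velocity component is Gaussian
    # with variance kB*T/m.
    return rng.normal(0.0, np.sqrt(kB * T / m), size=(n, 3))

def v_rms(v):
    return np.sqrt(np.mean(np.sum(v**2, axis=1)))

ratio = v_rms(sample_velocities(T_hot)) / v_rms(sample_velocities(T_cold))
print(ratio)  # ~2.0: quadrupling T doubles the whole velocity scale
```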

Continuing this scaling investigation, for a gas with collisions, slowing down the playback of a video has the effect of increasing the time between collisions, and as discussed, slowing down the video should look like lowering the temperature. And given a hard-sphere-like collision of two particles, scaling up the velocities of the particles involved also scales up the energy exchanged in the collision. So, just from kinetic theory, we see that the rate of heat transfer between two gases must increase if the temperatures of both gases were increased by the same factor. This is what Newton's law of cooling says, and it is the opposite of what your proposed law says.
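
Putting the same scaling argument in symbols (a rough kinetic-theory estimate under the assumptions above, with the volume and mean free path held fixed): scaling every velocity by $\lambda$ scales the temperature by $\lambda^2$, the energy exchanged per collision by $\lambda^2$, and the collision rate by $\lambda$, so the heat transfer rate goes as

$$\frac{dQ}{dt} \;\longrightarrow\; \lambda^3\,\frac{dQ}{dt} = \left(\frac{T'}{T}\right)^{3/2}\frac{dQ}{dt},$$

which grows, rather than shrinks, when both temperatures are scaled up together.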

Here is a further oddity: your law predicts that an infinitely hot heat bath has a bounded rate of heat exchange with any system at a finite, non-zero temperature, which, similar to the above, doesn't agree with how one would understand it from the kinetic theory of gases.
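
Explicitly, if the proposed law has the inverse-temperature form discussed above with some constant $k$:

$$\lim_{T_2 \to \infty} k\left(\frac{1}{T_1} - \frac{1}{T_2}\right) = \frac{k}{T_1},$$

a finite heat flow from an infinitely hot bath, whereas the kinetic picture above suggests the rate should grow without bound as the bath gets hotter.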

Comment by SymplecticMan on Newton's law of cooling from first principles · 2024-01-17T17:44:47.064Z · LW · GW

I'm going to open up with a technical point: it is important, not only in general but particularly in thermodynamics, to specify what quantities are being held fixed when taking partial derivatives. For example, you use this relation early on:

$$\frac{dS}{dQ} = \frac{1}{T}.$$

This is a relationship at constant volume. Specifically, the somewhat standard notation would be

$$\left(\frac{\partial S}{\partial U}\right)_V = \frac{1}{T},$$

where U is the internal energy.  The change in internal energy at constant volume is equal to the heat transfer, so it reduces to the relationship you used.

That brings us to the lemma you wanted to use:

$$\frac{dX}{dt} \propto \frac{\partial S}{\partial X}.$$

To get what you wanted, it has to actually be the derivative with constant volume on the right, but then there's a problem: it doesn't succeed in giving you the time derivative of V, since $\left(\frac{\partial S}{\partial V}\right)_U = \frac{p}{T}$ is generally non-zero even though the volume is supposed to stay fixed.

Let's assume that problem with the lemma can somehow be fixed, though, for sake of discussion. There's another issue, which is that if the proportionality depends on thermodynamic variables, then you can have basically any relationship. For example, your heat equation:

$$\frac{dQ_1}{dt} = \frac{k_1}{T_1} - \frac{k_2}{T_2}.$$

If these proportionalities were $k_1 = c\,T_1 T_2$ and $k_2 = c\,T_1 T_2$, it would actually give Newton's law of cooling. For an ideal gas, the relation $U = \frac{3}{2}Nk_BT$ means that the change in internal energy (which is just the heat transfer at constant volume) ought to be directly proportional to the temperature change, with no dependence on other thermodynamic variables (besides $N$) in the proportionality.
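
Spelling that out for the monatomic case (a standard relation, not something specific to the post):

$$U = \tfrac{3}{2}N k_B T \quad\Longrightarrow\quad dQ\big|_V = dU = \tfrac{3}{2}N k_B\,dT,$$

so the coefficient relating heat transferred at constant volume to the temperature change involves only $N$, not $T$ or $p$.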

Now we have your formula for the derivative of temperature:

$$\frac{dT_1}{dt} = \frac{k\,T_1}{C}\left(\frac{1}{T_1} - \frac{1}{T_2}\right).$$

As a side note, I'm not sure how this $C$ is a heat capacity; it doesn't match any of the heat capacity formulas I remember. But the appearance of the extra factor of $T_1$ is notable; it makes it look a lot closer to Newton's law of cooling, and comparing it to the earlier equation for heat shows how the proposed proportionalities from the first lemma contain dependence on other thermodynamic variables. But you changed from $Q_1$ and $Q_2$ to $T_1$ and $T_2$ before this, so it's worth remembering that there should be a symmetric relationship between the two subsystems. Multiplying both of the inverse temperature terms by a single temperature produces an asymmetry in the time derivatives for the two subsystems.

This asymmetry in the temperature dependence would predict that one subsystem will heat faster than the other subsystem cools, which would tend to violate energy conservation. If we just imagine an ideal gas in two separate containers with an identical number of particles in each container, any temperature increase in one gas has to be exactly compensated by an identical-magnitude temperature decrease in the other gas, since the internal energy is just proportional to temperature.
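
To make the energy bookkeeping explicit (this uses the reconstructed asymmetric form $\frac{dT_1}{dt} \propto 1 - T_1/T_2$ and its mirror image, with equal constant heat capacities, so treat it as a sketch of the argument rather than the post's exact equations):

$$\frac{d(T_1 + T_2)}{dt} \;\propto\; \left(1 - \frac{T_1}{T_2}\right) + \left(1 - \frac{T_2}{T_1}\right) \;=\; -\,\frac{(T_1 - T_2)^2}{T_1 T_2} \;\le\; 0,$$

so the total internal energy of the two identical containers would keep decreasing whenever the temperatures differ.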

So I argue that this proposed law does not hold up.

Comment by SymplecticMan on The Born Rule is Time-Symmetric · 2020-11-12T19:17:28.785Z · LW · GW

Note, though, that time reversal is still an anti-unitary operator in quantum mechanics in spite of the hand-waving argument failing when time reversal isn't a good symmetry. Even when time reversal symmetry fails, though, there's still CPT symmetry (and CPT is also anti-unitary).

Comment by SymplecticMan on Ethics in Many Worlds · 2020-11-07T19:49:00.137Z · LW · GW

I argue that counting branches is not well-behaved with the Hilbert space structure and unitary time evolution, and instead assigning a measure to branches (the 'dilution' argument) is the proper way to handle this. (See Wallace's decision-theory 'proof' of the Born rule for more).

The quantum state is a vector in a Hilbert space. Hilbert spaces have an inner product structure. That inner product structure is important for a lot of derivations/proofs of the Born rule, but in particular the inner product induces a norm. Norms let us do a lot of things. One of the more important things is we can define continuous functions. The short version is, for a continuous function, arbitrarily small changes to the input should produce arbitrarily small changes to the output. Another thing commonly used for vector spaces is linear operators, which are a kind of function that maps vectors to other vectors in a way that respects scalar multiplication and vector addition. We can combine the notion of continuous functions with linear operators and we get bounded linear operators.

While quantum mechanics contains a lot of unbounded operators representing observables (position, momentum, energy, etc.), bounded operators are still important. In particular, projection operators are bounded, and every self-adjoint operator, whether bounded or unbounded, has projection-valued measures. Projection-valued measures go hand-in-hand with the Born rule, and they are used to give the probability of a measurement falling on some set of values. There's an analogy with probability distributions. Sampling from an arbitrary distribution can in principle give an arbitrarily large number, and many distributions even lack a finite average. However, the probability of a sample from an arbitrary distribution falling in the interval [a,b] will always be a number between 0 and 1.

If we are careful to ask only about probabilities instead of averages, or even just to only ask about averages when the quantity is bounded, we can do practically everything in quantum mechanics with bounded linear operators. The expectation values of bounded linear operators are continuous functions of the quantum state. And so now we get to the core issue: arbitrarily small changes to the quantum state produce arbitrarily small changes to the expectation value of any bounded operator, and in particular to any Born rule probability.

So what about branch counting? Let's assume for sake of discussion that we have a preferred basis for counting in, which is its own can of worms. For a toy model, if we have a vector like (1, 0, 0, 0, 0, 0, ....) that we count as having 1 branch and a vector like (1, x, x, x, 0, 0, ....) that we're going to count as 4 branches if x is an arbitrarily small but nonzero number, this branch counting is not a continuous function of the state. If you don't know the state with infinite precision, you can't distinguish whether a coefficient is actually zero or just some really small positive number. Thus, you can't actually practically count the branches: there might be 1, there might be 4, there might be an infinite number of branches. On the other hand, the Born rule measure changes continuously with any small change to the state, so knowing the state with finite precision also gives finite precision on any Born rule measure.
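
A toy numerical version of this comparison (a sketch with a hypothetical 4-dimensional state and a hand-picked "preferred basis"; the function names are mine, not anything standard):

```python
import numpy as np

def born_probabilities(state):
    # Born rule weights in the chosen basis: |amplitude|^2 of the normalized state.
    state = np.asarray(state, dtype=complex)
    return np.abs(state / np.linalg.norm(state)) ** 2

def branch_count(state):
    # "Count branches" as the number of exactly non-zero amplitudes
    # in the preferred basis.
    return int(np.count_nonzero(np.asarray(state)))

eps = 1e-8
one_branch = np.array([1.0, 0.0, 0.0, 0.0])
four_branches = np.array([1.0, eps, eps, eps])

print(branch_count(one_branch), branch_count(four_branches))  # 1 vs 4
print(np.max(np.abs(born_probabilities(four_branches)
                    - born_probabilities(one_branch))))       # ~1e-16, of order eps^2
```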

In short, arbitrarily small changes to the quantum state can result in arbitrarily large changes to branch counting.

Comment by SymplecticMan on Multiple Worlds, One Universal Wave Function · 2020-11-06T22:28:23.307Z · LW · GW

I will amend my statement to be more precise:

Everett's proof that the Born rule measure (amplitude squared for orthogonal states) is the only measure that satisfies the desired properties has no dependence on tensor product structure.

Everett's proof that a "typical" observer sees measurements that agree with the Born rule in the long term uses the tensor product structure and the result of the previous proof. 

Comment by SymplecticMan on Multiple Worlds, One Universal Wave Function · 2020-11-06T16:56:30.165Z · LW · GW

I kind of get why Hermitian operators here makes sense, but then we apply the measurement and the system collapses to one of its eigenfunctions. Why?

If I understand what you mean, this is a consequence of what we defined as a measurement (or what's sometimes called a pre-measurement). Taking the tensor product structure and density matrix formalism as a given, if the interesting subsystem starts in a pure state, the unitary measurement structure implies that the reduced state of the interesting subsystem will generally be a mixed state after measurement. You might find parts of this review informative; it covers pre-measurements and also weak measurements, and in particular talks about how to actually implement measurements with an interaction Hamiltonian.
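
A minimal sketch of such a pre-measurement (a generic two-level example, not the particular model in the review): a unitary that correlates apparatus states $|M_0\rangle$, $|M_1\rangle$ with a system basis acts as

$$\left(\alpha|0\rangle + \beta|1\rangle\right)\otimes|\text{ready}\rangle \;\longrightarrow\; \alpha\,|0\rangle|M_0\rangle + \beta\,|1\rangle|M_1\rangle,$$

and tracing out the apparatus leaves the system in the mixed state $\rho_S = |\alpha|^2\,|0\rangle\langle 0| + |\beta|^2\,|1\rangle\langle 1|$ whenever both amplitudes are non-zero.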

Comment by SymplecticMan on Multiple Worlds, One Universal Wave Function · 2020-11-06T15:47:53.755Z · LW · GW

I don't see how that relates to what I said. I was addressing why an amplitude-only measure that respects unitarity and is additive over branches has to use amplitudes for a mutually orthogonal set of states to make sense. Nothing in Everett's proof of the Born rule relies on a tensor product structure.

Comment by SymplecticMan on Multiple Worlds, One Universal Wave Function · 2020-11-06T04:47:18.237Z · LW · GW

Why should (2,1) split into one branch of (2,0) and one branch of (0,1), not into one branch of (1,0) and one branch of (1,1)? 

Again, it's because of unitarity.

As Everett argues, we need to work with normalized states to unambiguously define the coefficients, so let's define normalized vectors v1=(1,0) and v2=(1,1)/sqrt(2). (1,0) has an amplitude of 1, (1,1) has an amplitude of sqrt(2), and (2,1) has an amplitude of sqrt(5). 

(2,1) = v1 + sqrt(2) v2, so we need M[sqrt(5)] = M[1] + M[sqrt(2)] for the additivity of measures. Now let's do a unitary transformation on (2,1) to get (1,2) = -1 v1 + 2 sqrt(2) v2 which still has an amplitude of sqrt(5). So now we need M[sqrt(5)] = M[2 sqrt(2)] + M[-1] = M[2 sqrt(2)] + M[1]. This can only work if M[2 sqrt(2)] = M[sqrt(2)]. If one wanted a strictly monotonic dependence on amplitude, that'd be the end. We can keep going instead and look at the vector (a+1, a) = v1 + a sqrt(2) v2, rotate it to (a, a+1) = -v1 + (a+1) sqrt(2) v2, and prove that M[(a+1) sqrt(2)] = M[a sqrt(2)] for all a. Continuing similarly, we're led inevitably to M[x] = 0 for any x. If we want a non-trivial measure with these properties, we have to look at orthogonal states.
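
A quick numerical check of those decompositions (just verifying the arithmetic above with numpy):

```python
import numpy as np

v1 = np.array([1.0, 0.0])
v2 = np.array([1.0, 1.0]) / np.sqrt(2)

# (2,1) written in terms of the normalized (but non-orthogonal) vectors v1, v2
print(np.allclose(v1 + np.sqrt(2) * v2, [2.0, 1.0]))        # True
# After the rotation (2,1) -> (1,2), the coefficients change non-trivially
print(np.allclose(-v1 + 2 * np.sqrt(2) * v2, [1.0, 2.0]))   # True
# ...but the overall amplitude is preserved by the rotation
print(np.linalg.norm([2.0, 1.0]), np.linalg.norm([1.0, 2.0]))  # both sqrt(5) ~ 2.236
```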

Comment by SymplecticMan on Multiple Worlds, One Universal Wave Function · 2020-11-06T03:17:35.877Z · LW · GW

I guess I don't understand the question. If we accept that mutually exclusive states are represented by orthogonal vectors, and we want to distinguish mutually exclusive states of some interesting subsystem, then what's unreasonable with defining a "measurement" as something that correlates our apparatus with the orthogonal states of the interesting subsystem, or at least as an ideal form of a measurement?

Comment by SymplecticMan on Multiple Worlds, One Universal Wave Function · 2020-11-05T21:37:13.008Z · LW · GW

I don't know if it would make things clearer, but questions about why eigenvectors of Hermitian operators are important can basically be recast as one question of why orthogonal states correspond to mutually exclusive 'outcomes'. From that starting point, projection-valued measures let you associate real numbers to various orthogonal outcomes, and that's how you make the operator with the corresponding eigenvectors.

As for why orthogonal states are important in the first place, the natural thing to point to is the unitary dynamics (though there are also various more sophisticated arguments).

Comment by SymplecticMan on Multiple Worlds, One Universal Wave Function · 2020-11-05T21:02:06.265Z · LW · GW

Everett argued in his thesis that the unitary dynamics motivated this:

...we demand that the measure assigned to a trajectory at one time shall equal the sum of the measures of its separate branches at a later time.

He made the analogy with Liouville's theorem in classical dynamics, where symplectic dynamics motivated the Lebesgue measure on phase space.

Comment by SymplecticMan on The Born Rule is Time-Symmetric · 2020-11-02T08:14:38.364Z · LW · GW

The earlier post has problems of its own: it works with an action with nonstandard units (in particular, mass is missing), its sign is backwards from the typical definition, and it doesn't address how vector potentials should be treated. The Lagrangian doesn't have to be positive, so interpreting it as any sort of temporal velocity will already be troublesome, but the Lagrangian is also not unique. It simply does not make sense in general to interpret a Lagrangian as a temporal velocity, so importing that notion into field theory also does not make sense.

The problem with all these entropic arrows of time is that a time reversible random walk tends to increase entropy both forward and backward in time. Without touching on time reversibility, fluctuation theorems, Liouville's theorem in classical mechanics and unitarity in quantum mechanics, fine-grained vs coarse-grained entropy, etc, I don't think this makes sense as an explanation of the arrow of time. As a physicist, this doesn't come across as a coherent description.

Comment by SymplecticMan on The Born Rule is Time-Symmetric · 2020-11-02T07:02:11.190Z · LW · GW

You say this theory of yours predicts "that localized quantum fields will maximize proper time". It's not clear how it's a prediction of this theory, since the statement of the prediction is the first time you mention proper time in this article. I looked through the follow-up link to see if the prediction came from there, but the link doesn't mention entropy (nor does this post mention action). And I don't believe that "The physics establishment sidesteps this quandary by defining time to progress in the direction of increasing entropy" is a fair assessment of how most physicists think about time and entropy. If you want to talk about the physics establishment and entropy, it would be helpful to address the work on fluctuation theorems.

I also have problems with the description of Lagrangian mechanics in the link about proper time. The description of the variation in Noether's theorem as being "oriented along the direction of the particle's path" isn't accurate; it's oriented in whatever direction corresponds to the symmetry in question. The statement that the Lagrangian density should be considered as a "temporal velocity density" doesn't have any justification, and seems difficult to mesh with the Lorentz invariance of the Lagrangian density. The substitution made for the field theory version of the Euler-Lagrange equation in terms of the d'Alembertian of the Lagrangian density is also incorrect.

Comment by SymplecticMan on The Born Rule is Time-Symmetric · 2020-11-02T06:21:34.874Z · LW · GW

Demanding that the time reversal operator leaves Q unchanged but reverses the sign of P (which is how time reversal in classical mechanics works) means that the time reversal operator has to be implemented by an anti-unitary operator. More hand-wavingly, since the Schrödinger equation gives $e^{-iHt/\hbar}|\psi\rangle$ as the forward time evolution of a state $|\psi\rangle$, $e^{+iHt/\hbar}|\psi\rangle$ (flipping the sign of time) should give the backward time evolution. But that's just the complex conjugate of the normal time evolution of the conjugated state, as you can see if you just conjugate the Schrödinger equation.
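
To spell out the conjugation step (a sketch, assuming a time-independent Hamiltonian that is real in the chosen basis, so $H^* = H$): if $\psi(t)$ solves $i\hbar\,\partial_t \psi = H\psi$, then taking the complex conjugate gives $-i\hbar\,\partial_t \psi^* = H\psi^*$, and the candidate time-reversed state $\tilde\psi(t) \equiv \psi^*(-t)$ satisfies

$$i\hbar\,\partial_t \tilde\psi(t) = -i\hbar\,(\partial_t \psi^*)(-t) = H\,\psi^*(-t) = H\,\tilde\psi(t),$$

i.e. the same Schrödinger equation. The conjugation is exactly what makes the implementation anti-unitary rather than unitary.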

See also this paper for more discussion on why the time reversal operator ought to behave in this way and on time reversal in general.

Comment by SymplecticMan on Uncalibrated quantum experiments act clasically · 2020-07-22T07:11:57.521Z · LW · GW

That's a good point; orthogonality is a strong, precise notion of "mutually exclusive" in quantum mechanics. (...)

I'd be remiss at this point not to mention Gleason's theorem: once you accept that notion of mutually exclusive events, the Born rule comes (almost) automatically. There's a relatively large camp that accepts Gleason's theorem as a good proof of why the Born rule must be the correct rule, but there's of course another camp that's looking for more solid proofs. Just on a personal note, I really like this paper, but I haven't seen much discussion about it anywhere.

But that's kind of vague, and my whole introduction was sloppy. I added it after the fact; maybe should have stuck with just the "three experiments".

The general idea of adding terms without interference effects when you average over phases is solid. I will have to think about it more in the context of alternative probability rules; I've never thought about any relation before.

From the wiki page, it sounds like a density matrix is a way of describing a probability distribution over wavefunctions. Which is what I've spent some time thinking about (though in this post I only wrote about probability distributions over a single amplitude). Except it isn't so simple: many distributions are indistinguishable, so the density matrix can be vastly smaller than a probability distribution over all relevant wavefunctions.
And some distributions ("ensembles") that sound different but are indistinguishable:
(...)
This is really interesting. It's satisfying to see things I was confusedly wondering about answered formally by von-Neumann almost 100 years ago.

Yeah, some of these sorts of things that are really important for getting a good grasp of the general situation don't often get any attention in undergraduate classes. Intro quantum classes often tend to be crunched for time between teaching the required linear algebra, solving the simple, analytically tractable problems, and getting to the stuff that has utility in physics. I happened to get exposed to density matrices relatively early as an undergraduate, but I think there's probably a good number of students who didn't see it until graduate school.

Roughly speaking, there's two big uses for density matrices. One, as you say, is the idea of probability distributions over wavefunctions (or 'pure states') in the minimal way. But the other, arguably more important one, is simply being able to describe subsystems. Only in extraordinary cases (non-entangled systems) is a subsystem of some larger system going to be in a pure state. Important things like the no-communication theorem are naturally expressed in terms of density matrices.
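
A small example of the subsystem point (a standard Bell-state calculation, sketched with numpy):

```python
import numpy as np

# Bell state (|00> + |11>)/sqrt(2) on a pair of qubits
psi = np.zeros(4, dtype=complex)
psi[0] = psi[3] = 1 / np.sqrt(2)

# Full (pure) density matrix, reshaped to indices (a, b, a', b')
rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)

# Partial trace over the second qubit gives the first qubit's density matrix
rho_A = np.einsum('abcb->ac', rho)
print(rho_A.real)                 # [[0.5, 0.], [0., 0.5]] -- maximally mixed
print(np.trace(rho_A @ rho_A))    # purity 0.5 < 1, so not a pure state
```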

Von Neumann invented/discovered such a huge portion of the relevant linear algebra behind quantum mechanics that it's kind of ridiculous.

Comment by SymplecticMan on Uncalibrated quantum experiments act clasically · 2020-07-22T03:33:47.550Z · LW · GW

I haven't closely read the details on the hypothetical experiments yet, but I want to comment on the technical details of the quantum mechanics at the beginning.

In quantum mechanics, probabilities of mutually exclusive events still add: $P(A \text{ or } B) = P(A) + P(B)$. However, things like "particle goes through slit 1 then hits spot x on screen" and "particle goes through slit 2 then hits spot x on screen" aren't such mutually exclusive events.

This may seem like I'm nit-picking, but I'd like to make the point by example. Let's say we have a state where the amplitudes for two mutually exclusive events $A$ and $B$ are $\frac{1}{\sqrt{2}}$ and $-\frac{1}{\sqrt{2}}$, i.e. $|\psi\rangle = \frac{1}{\sqrt{2}}\left(|A\rangle - |B\rangle\right)$. If we simply add the complex amplitudes to try to calculate $P(A \text{ or } B)$, we get 0; in actuality, we should get $P(A) + P(B) = 1$, as we expect from classical logic.

Here's where I bad-mouth the common way of writing the Born rule in intro quantum material as $P = |\langle\phi|\psi\rangle|^2$ and the way I'd been using it. By writing the state as $|\psi\rangle$ and the event as $|\phi\rangle$, we've made it look like they're both naturally represented as vectors in a Hilbert space. But the natural form of a state is as a density matrix, and the natural form of an event is as an orthogonal projection; I want to focus on events and projections. For mutually exclusive events $A$ and $B$ with projections $P_A$ and $P_B$, the event "$A$ or $B$" has the corresponding projection $P_A + P_B$.

So where's the adding of amplitudes? Let's pretend I didn't just say states are naturally density matrices, and let's take the same state from above, $|\psi\rangle = \frac{1}{\sqrt{2}}\left(|A\rangle - |B\rangle\right)$, and an arbitrary projection $\Pi$ corresponding to some event. The Born rule takes the following form:

$$P = \langle\psi|\Pi|\psi\rangle = \tfrac{1}{2}\langle A|\Pi|A\rangle + \tfrac{1}{2}\langle B|\Pi|B\rangle - \mathrm{Re}\,\langle A|\Pi|B\rangle.$$

This is notably not just an $|A\rangle$ contribution plus a $|B\rangle$ contribution; the other terms are the interference terms. Skipping over what a density matrix is, let's say we have a density matrix $\rho = \tfrac{1}{2}|A\rangle\langle A| + \tfrac{1}{2}|B\rangle\langle B|$. The Born rule for density matrices is

$$P = \mathrm{Tr}[\rho\,\Pi] = \tfrac{1}{2}\langle A|\Pi|A\rangle + \tfrac{1}{2}\langle B|\Pi|B\rangle.$$

Now this one is just a sum of two contributions, with no interference.
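
A numerical version of this comparison, using the toy state reconstructed above and a projection that mixes the two events (the variable names are just for illustration):

```python
import numpy as np

A = np.array([1.0, 0.0])
B = np.array([0.0, 1.0])
psi = (A - B) / np.sqrt(2)          # the toy state above

plus = (A + B) / np.sqrt(2)
Pi = np.outer(plus, plus)           # a projection whose "event" mixes A and B

# Pure-state Born rule: the interference terms matter
print(psi @ Pi @ psi)               # 0.0

# The 50/50 density matrix over A and B: same weights, no interference terms
rho = 0.5 * np.outer(A, A) + 0.5 * np.outer(B, B)
print(np.trace(rho @ Pi))           # 0.5
```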

This ended up longer and more rambling than I'd originally intended. But I think there's a lot to the finer details of how probabilities and amplitudes behave that are worth emphasizing.