Comment by Brandeis on The Lightcone Theorem: A Better Foundation For Natural Abstraction? · 2023-05-15T16:37:43.595Z

I think it might be useful to mention an analogy between your considerations and actual particle physics, where people are stuck on a functionally similar problem. They have tried, and so far largely failed, to make progress on it, but perhaps you can find some inspiration in their attempts.

The most immediate shortcoming of the Telephone Theorem and the resampling argument is that they talk about behavior in infinite limits. To use them, either we need to have an infinitely large graphical model, or we need to take an approximation.
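To spell out the sort of limit involved (a minimal sketch, assuming the variables $X_n$ along the chain of nested Markov blankets form a Markov chain, as in the Telephone Theorem setup): by the data processing inequality,

$$ I(X_0; X_{n+1}) \le I(X_0; X_n), $$

so the mutual information is nonincreasing and bounded below by zero, and $\lim_{n \to \infty} I(X_0; X_n)$ exists; the theorem is a statement about what carries the information that survives in that limit. An infinitely large graphical model (or an approximation) is needed precisely because the statement lives at $n \to \infty$.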

In particle physics, there is a quantity called the scattering matrix (S-matrix); loosely speaking, it connects a set of asymptotically free "in" states to a set of asymptotically free "out" states, where "in" means the state is projected to the infinite past and "out" means projected to the infinite future. For example, to describe a 2->2 electron scattering process, I would take two electrons "in" in the far past, two electrons "out" in the far future, and sandwich the S-matrix between the two states; the S-matrix carries the "interaction" information, in particular the probability (we're dealing with quantum mechanical entities) of such a process happening.
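Schematically (standard textbook notation, not anything specific to your post; the $p_i$ label the electron momenta):

$$ \mathcal{A}(p_1, p_2 \to p_3, p_4) \;=\; \langle p_3\, p_4,\ \mathrm{out} \mid p_1\, p_2,\ \mathrm{in} \rangle \;=\; \langle p_3\, p_4 \mid S \mid p_1\, p_2 \rangle, \qquad P(\text{process}) \propto |\mathcal{A}|^2 . $$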

Compare this with the analogous statement on the graphical-model side: long-range interactions in a probabilistic graphical model (in the long-range limit) are mediated by quantities which are conserved (in the long-range limit).

The S-matrix can also be almost completely constrained by global symmetries (which, by Noether's theorem, come with conserved quantities) together with general consistency conditions such as unitarity, using what's known as the S-matrix bootstrap. The entries of the S-matrix are themselves Lorentz invariant, so light-cone-type causality is baked into the formalism.
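As one example of the kind of constraint the bootstrap exploits (a standard fact, written schematically): splitting off the trivial part as $S = 1 + iT$, unitarity $S^\dagger S = 1$ implies

$$ -i\,(T - T^\dagger) \;=\; T^\dagger T , $$

which, together with Lorentz invariance, analyticity, and crossing, goes a surprisingly long way toward fixing the amplitudes.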

In physics, it's perfectly fine to take these infinite limits if the background spacetime has the appropriate asymptotic structure, i.e., there exists a good definition of what constitutes the far past/future. This works well for particle physics experiments, where the scales are so small that the background spacetime is practically flat and you can take these limits safely. The trouble is that when we scale up, we seem to live in an expanding universe (approximately de Sitter space) whose geometry doesn't support taking such limits. It's an open problem in physics to formulate something like an S-matrix on de Sitter space, so that we can do particle physics on large scales.
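Concretely (my gloss, not anything from your post): in the flat slicing, the de Sitter metric can be written as

$$ ds^2 = -\,dt^2 + e^{2Ht}\, d\vec{x}^{\,2}, $$

with $H$ the Hubble rate. Spatial separations grow exponentially, every observer sits inside a cosmological horizon, and there is no good analogue of the flat-space asymptotic regions from which to define "in" and "out" states.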

People have tried all sorts of things (like what you have: splitting the universe up into a bunch of hypersurfaces X_i, doing asymptotics there, and then somehow gluing), but they run into many technical problems, such as the initial-data hypersurface not being properly Cauchy, finite-entropy issues, and so on.