Integrating Hidden Variables Improves Approximation

post by johnswentworth · 2020-04-16T21:43:04.639Z · LW · GW · 4 comments

Fun fact: the KL divergence of distribution Q from distribution P is convex in the pair (P, Q). Writing it out:

D_KL(λ_1 P_1 + λ_2 P_2 || λ_1 Q_1 + λ_2 Q_2) ≤ λ_1 D_KL(P_1 || Q_1) + λ_2 D_KL(P_2 || Q_2)

with λ_1 + λ_2 = 1 and λ_1, λ_2 ≥ 0.
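
As a quick sanity check, here is a minimal numerical sketch (mine, not from the post) that verifies the convexity inequality for randomly chosen discrete distributions; the distribution sizes, random seed, and tolerance are arbitrary choices.

```python
# Check joint convexity of KL divergence: the divergence of a mixture
# never exceeds the corresponding mixture of divergences.
import numpy as np

def kl(p, q):
    """D_KL(p || q) for discrete distributions given as 1-D arrays."""
    return float(np.sum(p * np.log(p / q)))

rng = np.random.default_rng(0)

def random_dist(n):
    x = rng.random(n) + 1e-3   # keep probabilities bounded away from zero
    return x / x.sum()

n = 5
p1, p2 = random_dist(n), random_dist(n)
q1, q2 = random_dist(n), random_dist(n)

for lam in np.linspace(0.0, 1.0, 11):
    lhs = kl(lam * p1 + (1 - lam) * p2, lam * q1 + (1 - lam) * q2)
    rhs = lam * kl(p1, q1) + (1 - lam) * kl(p2, q2)
    assert lhs <= rhs + 1e-12   # convexity holds for every mixture weight
```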

This is particularly interesting if we take P and Q to be two different models, and take the indices 1, 2 to be different values of another random variable Y with distribution given by the λ's. In that case, the above inequality becomes:

D_KL(P[X] || Q[X]) ≤ E_Y[D_KL(P[X|Y] || Q[X|Y])]

In English: the divergence between our models of the X-distribution ignoring Y is at least as small as the average divergence between our models of the X-distribution given Y. This is true regardless of what the two models are - any approximation of the observable distribution improves (or gets no worse) when we integrate out a hidden variable, compared to fixing the value of the hidden variable.
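
To make that concrete, here is a hypothetical sketch (my construction, not the post's): two models share the same distribution over the hidden variable Y, as in the setup above, but have different conditionals P[X|Y] and Q[X|Y]. Marginalizing out Y never increases the divergence.

```python
# Two models with shared prior over Y but different conditionals P[X|Y], Q[X|Y]:
# compare the divergence of the marginals with the Y-averaged conditional divergence.
import numpy as np

def kl(p, q):
    """D_KL(p || q) for discrete distributions given as 1-D arrays."""
    return float(np.sum(p * np.log(p / q)))

rng = np.random.default_rng(1)

def random_dist(n):
    x = rng.random(n) + 1e-3   # keep probabilities bounded away from zero
    return x / x.sum()

n_y, n_x = 4, 6
prior_y = random_dist(n_y)                                   # distribution of the hidden variable Y
P_cond = np.stack([random_dist(n_x) for _ in range(n_y)])    # row y is P[X | Y=y]
Q_cond = np.stack([random_dist(n_x) for _ in range(n_y)])    # row y is Q[X | Y=y]

P_x = prior_y @ P_cond   # P[X] = sum_y P[Y=y] * P[X | Y=y]
Q_x = prior_y @ Q_cond

lhs = kl(P_x, Q_x)                                                       # divergence after integrating out Y
rhs = sum(prior_y[y] * kl(P_cond[y], Q_cond[y]) for y in range(n_y))     # E_Y[D_KL(P[X|Y] || Q[X|Y])]
print(f"{lhs:.4f} <= {rhs:.4f}")
assert lhs <= rhs + 1e-12
```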

Of course, this doesn't say anything about how much the approximation improves. Presumably for bad approximations, the divergence will not converge to anywhere near zero as we integrate out more and more hidden variables. And if the hidden variable doesn't actually interact with the observables significantly, then presumably the divergence decrease will be near-zero.

So when would we expect this to matter?

I'd expect it to matter mainly when the observable consists of multiple variables which are "far apart" in a large model - i.e. there are many hidden variables mediating the interactions between observables. In other words, I'd expect this phenomenon to mainly be relevant to information at a distance. It's a hint that information at a distance, in complex systems, converges to some sort of universal behavior/properties, which is simpler in some sense than the full fine-grained model.

4 comments

Comments sorted by top scores.

comment by VojtaKovarik · 2020-08-09T11:55:49.945Z · LW(p) · GW(p)

I am usually reasonably good at translating from math to non-abstract intuitive examples...but I didn't have much success here. Do you have an "in English, for simpletons" example to go with this? :-) (You know, something that uses apples and biscuits rather than English-but-abstract words like "there are many hidden variables mediating the interactions between observables" :D.)

Otherwise, my current abstract interpretation of this is something like: "There are detailed models, and those might vary a lot. And then there are very abstract models, which will be more similar to each other...well, except that they might also be totally useless." So I was hoping that a more specific example would clarify things a bit and tell me whether there is more to this (and also whether I got it all wrong or not :-).)

Replies from: johnswentworth
comment by johnswentworth · 2020-08-09T16:48:58.350Z · LW(p) · GW(p)

I recommend skipping to the next post. This post was kind of a stub, the next one explains the same idea better.

comment by Clark Benham (clark-benham) · 2020-04-17T01:27:36.548Z · LW(p) · GW(p)

Doesn't

D_KL(P[X] || Q[X]) ≤ E_Y[D_KL(P[X|Y] || Q[X|Y])]

mean that you can't expect to improve by integrating in additional information Y?

Replies from: johnswentworth
comment by johnswentworth · 2020-04-17T01:47:48.518Z · LW(p) · GW(p)

Bit of an accidental pun here. "Integrating additional information" (in the usual sense of the phrase) has exactly the opposite meaning of "integrate out a variable" - when we integrate over the variable (in the mathy sense of the phrase), we're throwing out whatever information it contains.

So, yes - it does mean that we can't expect an approximation to improve when we integrate in additional information (in the layman's sense of the phrase).