A Robust Natural Latent Over A Mixed Distribution Is Natural Over The Distributions Which Were Mixed

post by johnswentworth, David Lorell · 2024-08-22T19:19:28.940Z · LW · GW · 4 comments

Contents

  Trivial Version
  Nontrivial Version
    Redundancy
    Mediation
4 comments

This post walks through the math for a theorem. It’s intended to be a reference post, which we’ll link back to as-needed from future posts. The question which first motivated this theorem for us was: “Redness of a marker seems like maybe a natural latent over a bunch of parts of the marker, and redness of a car seems like maybe a natural latent over a bunch of parts of the car, but what makes redness of the marker ‘the same as’ redness of the car? How are they both instances of one natural thing, i.e. redness? (or ‘color’?)”. But we’re not going to explain in this post how the math might connect to that use-case; this post is just the math.

Suppose we have multiple distributions $P^1[X], \ldots, P^K[X]$ over the same random variables $X_1, \ldots, X_n$. (Speaking somewhat more precisely: the distributions are over the same set, and an element of that set is represented by values $(x_1, \ldots, x_n)$.) We take a mixture of the distributions: $P[X] := \sum_k \alpha_k P^k[X]$, where $\sum_k \alpha_k = 1$ and $\alpha$ is nonnegative. Then our theorem says: if an approximate natural latent exists over $P[X]$, and that latent is robustly natural under changing the mixture weights $\alpha$, then the same latent is approximately natural over $P^k[X]$ for all $k$.

Mathematically: the natural latent over $P[X]$ is defined by $(x, \lambda) \mapsto P[\Lambda = \lambda \mid X = x]$, and naturality means that the distribution $(x, \lambda) \mapsto P[\Lambda = \lambda \mid X = x]\, P[X = x]$ satisfies the naturality conditions [LW · GW] (mediation and redundancy). The theorem says that, if the joint distribution $(x, \lambda) \mapsto P[\Lambda = \lambda \mid X = x] \sum_k \alpha_k P^k[X = x]$ satisfies the naturality conditions robustly with respect to changes in $\alpha$, then $(x, \lambda) \mapsto P[\Lambda = \lambda \mid X = x]\, P^k[X = x]$ satisfies the naturality conditions for all $k$. “Robustness” here can be interpreted in multiple ways - we’ll cover two here, one for which the theorem is trivial and another more substantive, but we expect there are probably more notions of “robustness” which also make the theorem work.
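For reference, with $X_{\bar i}$ denoting all components of $X$ except $X_i$, the approximate versions of the two conditions as used below are: the mediation error of a joint distribution over $\Lambda, X$ is

$$D_{KL}\!\left(P[\Lambda, X]\,\middle\|\,P[\Lambda]\prod_i P[X_i \mid \Lambda]\right)$$

and the redundancy error with respect to $X_i$ is

$$D_{KL}\!\left(P[\Lambda, X]\,\middle\|\,P[X]\,P[\Lambda \mid X_{\bar i}]\right)$$

and "natural to within $\epsilon$" means each of these approximation errors is at most $\epsilon$.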

Trivial Version

First notion of robustness: the joint distribution $(x, \lambda) \mapsto P[\Lambda = \lambda \mid X = x] \sum_k \alpha_k P^k[X = x]$ satisfies the naturality conditions to within $\epsilon$ for all values of $\alpha$ (subject to $\sum_k \alpha_k = 1$ and $\alpha$ nonnegative).

Then: the joint distribution $(x, \lambda) \mapsto P[\Lambda = \lambda \mid X = x] \sum_j \alpha_j P^j[X = x]$ satisfies the naturality conditions to within $\epsilon$ specifically for $\alpha_j = \delta_{jk}$, i.e. the $\alpha$ which is 0 in all entries except a 1 in entry $k$. In that case, the joint distribution is $(x, \lambda) \mapsto P[\Lambda = \lambda \mid X = x]\, P^k[X = x]$, therefore $\Lambda$ is natural over $P^k$. Invoke for each $k$, and the theorem is proven.

... but that's just abusing an overly-strong notion of robustness. Let's do a more interesting one.

Nontrivial Version

Second notion of robustness: the joint distribution $(x, \lambda) \mapsto P[\Lambda = \lambda \mid X = x] \sum_k \alpha_k P^k[X = x]$ satisfies the naturality conditions to within $\epsilon$, and the gradient of the approximation error with respect to (allowed) changes in $\alpha$ is (locally) zero.

We need to prove that the joint distributions $(x, \lambda) \mapsto P[\Lambda = \lambda \mid X = x]\, P^k[X = x]$ satisfy both the mediation and redundancy conditions for each $k$. We’ll start with redundancy, because it’s simpler.

Redundancy

We can express the approximation error of the redundancy condition with respect to $X_i$ under the mixed distribution as

$$D_{KL}\!\left(P[\Lambda, X]\,\middle\|\,P[X]\,P[\Lambda \mid X_{\bar i}]\right)$$

where, recall, $P[\Lambda, X] := P[\Lambda \mid X] \sum_k \alpha_k P^k[X]$. Throughout, we’ll also write $P^k[\Lambda, X] := P[\Lambda \mid X]\, P^k[X]$ for the corresponding joint under component $k$, with $P^k[\Lambda \mid X_{\bar i}]$ etc. denoting its conditionals.

We can rewrite that approximation error as:

$$D_{KL}\!\left(P[\Lambda, X]\,\middle\|\,P[X]\,P[\Lambda \mid X_{\bar i}]\right) \;=\; \sum_{X, \Lambda} \left(\sum_k \alpha_k P^k[X]\right) P[\Lambda \mid X]\, \log\frac{P[\Lambda \mid X]}{P[\Lambda \mid X_{\bar i}]}$$
Note that $P[\Lambda \mid X]$ is the same under all the distributions (by definition), so:

$$=\; \sum_k \alpha_k \sum_{X, \Lambda} P^k[\Lambda, X]\, \log\frac{P^k[\Lambda \mid X]}{P[\Lambda \mid X_{\bar i}]}$$
and by factorization transfer [LW · GW]:

$$\geq\; \sum_k \alpha_k \sum_{X, \Lambda} P^k[\Lambda, X]\, \log\frac{P^k[\Lambda \mid X]}{P^k[\Lambda \mid X_{\bar i}]} \;=\; \sum_k \alpha_k\, D_{KL}\!\left(P^k[\Lambda, X]\,\middle\|\,P^k[X]\,P^k[\Lambda \mid X_{\bar i}]\right)$$
In other words: if $\epsilon^k_i$ is the redundancy error with respect to $X_i$ under distribution $P^k$, and $\epsilon_i$ is the redundancy error with respect to $X_i$ under the mixed distribution $P$, then

$$\epsilon_i \;\geq\; \sum_k \alpha_k\, \epsilon^k_i$$
The redundancy error of the mixed distribution is at least the weighted average of the redundancy errors of the individual distributions.

Since the $\epsilon^k_i$ terms are nonnegative, that also means

$$\epsilon^k_i \;\leq\; \frac{1}{\alpha_k}\, \epsilon_i$$

which bounds the approximation error for the $X_i$ redundancy condition under distribution $P^k$. Also note that, insofar as the latent is natural across multiple $\alpha$ values, we can use the $\alpha$ value with largest $\alpha_k$ to get the best bound for $\epsilon^k_i$.
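As a quick numerical sanity check of that bound, here is a minimal sketch (ours, for illustration; it assumes two observed variables with small finite supports, random component distributions $P^k[X]$, and a latent specified by a randomly chosen shared conditional $P[\Lambda \mid X]$):

```python
import numpy as np

rng = np.random.default_rng(0)

n_components = 3      # number of mixture components (the index k)
n_x1, n_x2 = 2, 3     # X = (X1, X2): two observed variables with small finite supports
n_lam = 2             # support size of the latent Lambda

# Random component distributions P^k[X1, X2] and mixture weights alpha_k.
Pk = rng.random((n_components, n_x1, n_x2))
Pk /= Pk.sum(axis=(1, 2), keepdims=True)
alpha = rng.random(n_components)
alpha /= alpha.sum()

# The latent is specified by a single conditional P[Lambda | X1, X2],
# shared by the mixture and by every component ("the same latent").
P_lam_given_x = rng.random((n_x1, n_x2, n_lam))
P_lam_given_x /= P_lam_given_x.sum(axis=2, keepdims=True)

def redundancy_error(P_x, P_lam_given_x, i):
    """D_KL( P[Lambda,X] || P[X] P[Lambda | X_bar_i] ), where X_bar_i is the other variable."""
    joint = P_x[:, :, None] * P_lam_given_x                          # P[X1, X2, Lambda]
    P_xbar_lam = joint.sum(axis=i)                                   # P[X_bar_i, Lambda]
    P_lam_given_xbar = P_xbar_lam / P_xbar_lam.sum(axis=-1, keepdims=True)
    Q = P_x[:, :, None] * np.expand_dims(P_lam_given_xbar, axis=i)   # P[X] P[Lambda | X_bar_i]
    return float((joint * np.log(joint / Q)).sum())

P_mix = np.einsum('k,kab->ab', alpha, Pk)                            # P[X] = sum_k alpha_k P^k[X]

for i in (0, 1):  # redundancy condition with respect to X1 and with respect to X2
    eps_mix = redundancy_error(P_mix, P_lam_given_x, i)
    eps_k = [redundancy_error(Pk[k], P_lam_given_x, i) for k in range(n_components)]
    weighted_avg = sum(a * e for a, e in zip(alpha, eps_k))
    assert eps_mix >= weighted_avg - 1e-9                            # eps_i >= sum_k alpha_k eps^k_i
    print(f"i={i}: mixture error {eps_mix:.4f} >= weighted average {weighted_avg:.4f}")
```

With `alpha` set to a one-hot vector, the mixture collapses to a single component and the check reduces to the trivial version above.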

Mediation

Mediation relies more heavily on the robustness of naturality to changes in $\alpha$. The gradient of the mediation approximation error with respect to $\alpha$ is:

$$\frac{\partial}{\partial \alpha_k}\, D_{KL}\!\left(P[\Lambda, X]\,\middle\|\,P[\Lambda]\prod_i P[X_i \mid \Lambda]\right) \;=\; \sum_{X, \Lambda} P[\Lambda \mid X]\, P^k[X]\, \log\frac{P[\Lambda, X]}{P[\Lambda]\prod_i P[X_i \mid \Lambda]}$$

(Note: it’s a nontrivial but handy fact that, in general, the change in approximation error of a distribution $P$ over some DAG $G$ under a change $dP$ is $\sum_X dP[X]\, \log\frac{P[X]}{\prod_i P[X_i \mid X_{\mathrm{pa}(i)}]}$.)
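One way to see that fact: write the approximation error as $\sum_X P[X]\left(\log P[X] - \sum_i \log P[X_i \mid X_{\mathrm{pa}(i)}]\right)$ and take the first-order variation under a change $dP$ (with $\sum_X dP[X] = 0$). The contributions from varying the distributions inside the logarithms cancel:

$$\sum_X P[X]\, d\log P[X] \;=\; \sum_X dP[X] \;=\; 0, \qquad \sum_X P[X]\, d\log P[X_i \mid X_{\mathrm{pa}(i)}] \;=\; \sum_{X_{\mathrm{pa}(i)}} P[X_{\mathrm{pa}(i)}]\; d\!\left(\sum_{X_i} P[X_i \mid X_{\mathrm{pa}(i)}]\right) \;=\; 0$$

so the only surviving term is $\sum_X dP[X]\, \log\frac{P[X]}{\prod_i P[X_i \mid X_{\mathrm{pa}(i)}]}$.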

Note that this gradient must be zero along allowed changes in $\alpha$, which means the changes must respect $\sum_k d\alpha_k = 0$. That means the gradient must be constant across indices $k$:

$$\sum_{X, \Lambda} P[\Lambda \mid X]\, P^k[X]\, \log\frac{P[\Lambda, X]}{P[\Lambda]\prod_i P[X_i \mid \Lambda]} \;=\; \text{const for all } k$$
To find that constant, we can take a sum weighted by $\alpha_k$ on both sides:

$$\text{const} \;=\; \sum_k \alpha_k \sum_{X, \Lambda} P[\Lambda \mid X]\, P^k[X]\, \log\frac{P[\Lambda, X]}{P[\Lambda]\prod_i P[X_i \mid \Lambda]} \;=\; D_{KL}\!\left(P[\Lambda, X]\,\middle\|\,P[\Lambda]\prod_i P[X_i \mid \Lambda]\right)$$
So, robustness tells us that the approximation error under the mixed distribution can be written as

$$D_{KL}\!\left(P[\Lambda, X]\,\middle\|\,P[\Lambda]\prod_i P[X_i \mid \Lambda]\right) \;=\; \sum_{X, \Lambda} P[\Lambda \mid X]\, P^k[X]\, \log\frac{P[\Lambda, X]}{P[\Lambda]\prod_i P[X_i \mid \Lambda]}$$

for any $k$.

Next, we’ll write out $P[\Lambda, X]$ inside the logarithm as a mixture weighted by $\alpha$; since every term of that mixture is nonnegative, keeping only the $k^{\text{th}}$ term can only decrease the logarithm:

$$\sum_{X, \Lambda} P[\Lambda \mid X]\, P^k[X]\, \log\frac{\sum_j \alpha_j P^j[\Lambda, X]}{P[\Lambda]\prod_i P[X_i \mid \Lambda]} \;\geq\; \log \alpha_k \;+\; \sum_{X, \Lambda} P^k[\Lambda, X]\, \log\frac{P^k[\Lambda, X]}{P[\Lambda]\prod_i P[X_i \mid \Lambda]}$$
Then factorization transfer gives:

$$\log \alpha_k \;+\; \sum_{X, \Lambda} P^k[\Lambda, X]\, \log\frac{P^k[\Lambda, X]}{P[\Lambda]\prod_i P[X_i \mid \Lambda]} \;\geq\; \log \alpha_k \;+\; D_{KL}\!\left(P^k[\Lambda, X]\,\middle\|\,P^k[\Lambda]\prod_i P^k[X_i \mid \Lambda]\right)$$
Much like redundancy, if $\epsilon^k$ is the mediation error under distribution $P^k$ (note that we’re overloading notation, $\epsilon^k$ is no longer the redundancy error), and $\epsilon$ is the mediation error under the mixed distribution $P$, then the above says

$$\epsilon \;\geq\; \log \alpha_k \;+\; \epsilon^k$$
Rearranging, that also means

$$\epsilon^k \;\leq\; \epsilon \;+\; \log\frac{1}{\alpha_k}$$

which bounds the approximation error for the mediation condition under distribution $P^k$.
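The two inequalities in that chain hold for arbitrary distributions; robustness is only what identifies the left-hand side $\sum_{X, \Lambda} P[\Lambda \mid X]\, P^k[X]\, \log\frac{P[\Lambda, X]}{P[\Lambda]\prod_i P[X_i \mid \Lambda]}$ with $\epsilon$. So the unconditional part can be checked numerically; here is a minimal sketch of such a check (again ours, for illustration, under the same assumptions as the redundancy check above):

```python
import numpy as np

rng = np.random.default_rng(1)

n_components, n_x1, n_x2, n_lam = 3, 2, 3, 2

# Random component distributions P^k[X1, X2], mixture weights, and shared P[Lambda | X].
Pk = rng.random((n_components, n_x1, n_x2))
Pk /= Pk.sum(axis=(1, 2), keepdims=True)
alpha = rng.random(n_components)
alpha /= alpha.sum()
P_lam_given_x = rng.random((n_x1, n_x2, n_lam))
P_lam_given_x /= P_lam_given_x.sum(axis=2, keepdims=True)

def factored(joint):
    """P[Lambda] * P[X1|Lambda] * P[X2|Lambda] computed from a joint P[X1, X2, Lambda]."""
    P_lam = joint.sum(axis=(0, 1))
    P_x1_given_lam = joint.sum(axis=1) / P_lam
    P_x2_given_lam = joint.sum(axis=0) / P_lam
    return P_lam[None, None, :] * P_x1_given_lam[:, None, :] * P_x2_given_lam[None, :, :]

def mediation_error(P_x, P_lam_given_x):
    """D_KL( P[Lambda,X] || P[Lambda] prod_i P[X_i | Lambda] )."""
    joint = P_x[:, :, None] * P_lam_given_x
    return float((joint * np.log(joint / factored(joint))).sum())

P_mix = np.einsum('k,kab->ab', alpha, Pk)             # P[X] = sum_k alpha_k P^k[X]
joint_mix = P_mix[:, :, None] * P_lam_given_x         # P[Lambda, X] under the mixture
log_ratio = np.log(joint_mix / factored(joint_mix))   # log( P[Lambda,X] / (P[Lambda] prod_i P[X_i|Lambda]) )

for k in range(n_components):
    joint_k = Pk[k][:, :, None] * P_lam_given_x       # P[Lambda|X] P^k[X]
    lhs = float((joint_k * log_ratio).sum())          # expectation of the mixture's log-ratio under component k
    rhs = np.log(alpha[k]) + mediation_error(Pk[k], P_lam_given_x)
    assert lhs >= rhs - 1e-9                          # unconditional part of the chain above
    print(f"k={k}: {lhs:.4f} >= log(alpha_k) + eps^k = {rhs:.4f}")
```

Under the robustness (zero-gradient) condition, `lhs` additionally equals the mixture’s mediation error $\epsilon$, which is what turns this into the bound $\epsilon^k \leq \epsilon + \log\frac{1}{\alpha_k}$.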

4 comments


comment by faul_sname · 2024-08-22T23:31:10.738Z · LW(p) · GW(p)

Alright, I'm terrible at abstract thinking, so I went through the post and came up with a concrete example. Does this seem about right?

Suppose we have multiple distributions $P^1[X], \ldots, P^K[X]$ over the same random variables $X_1, \ldots, X_n$. (Speaking somewhat more precisely: the distributions are over the same set, and an element of that set is represented by values $(x_1, \ldots, x_n)$.)

We are a quantitative trading firm. Our investment strategy is such that we care about the prices of the stocks in the S&P 500 at market close today ($X_1, \ldots, X_n$).

We have a bunch of models of the stock market ($P^1[X], \ldots, P^K[X]$), where we can feed in a set of possible prices of stocks in the S&P 500 at market close, and the model spits out a probability of seeing that exact combination of prices (where a single combination of prices is $(x_1, \ldots, x_n)$).

We take a mixture of the distributions: $P[X] := \sum_k \alpha_k P^k[X]$, where $\sum_k \alpha_k = 1$ and $\alpha$ is nonnegative

We believe that some of our models are better than others, so our trading strategy is to take a weighted average of the predictions of each model, where the weight assigned to the $k$th model $P^k[X]$ is $\alpha_k$, and obviously the weights have to sum to 1 for this to be an "average".

Mathematically: the natural latent over $P[X]$ is defined by $(x, \lambda) \mapsto P[\Lambda = \lambda \mid X = x]$, and naturality means that the distribution $(x, \lambda) \mapsto P[\Lambda = \lambda \mid X = x]\, P[X = x]$ satisfies the naturality conditions [LW · GW] (mediation and redundancy).

We believe that there is some underlying factor which we will call "market factors" ($\Lambda$) such that if you control for "market factors", you no longer learn (approximately) anything about the price of say MSFT when you learn about the price of AAPL, and also such that if you order the stocks in the S&P 500 alphabetically and then take the odd-indexed stocks (i.e. A, AAPL, ABNB, ...) in that list and call them the S&P250odd, and call the even-indexed (i.e. AAL, ABBV, ABT, ...) ones the S&P250even, you will come to (approximately) the same estimation of "market factors" by looking at either the S&P250odd or the S&P250even. Further, this means that if you estimate "market conditions" by looking at S&P250odd, then your estimation of the price of AAL will be approximately unchanged if you learn the price of ABT.

Then our theorem says: if an approximate natural latent exists over $P[X]$, and that latent is robustly natural under changing the mixture weights $\alpha$, then the same latent is approximately natural over $P^k[X]$ for all $k$.

Anyway, if we find that the above holds for the weighted sum we use in practice, and we also find that it robustly [1] holds when we change the weights, that actually means that all of our market price models take "market factors" into account.

Alternatively stated, it means that if one of the models was written by an intern that procrastinated until the end of his internship and then on the last morning wrote def predict_price(ticker): return numpy.random.lognormal(), then our weighted sum is not robust to changes in the weights.

Is this a reasonable interpretation? If so, I'm pretty interested to see where you go with this. 

  1. ^

    Terms and conditions apply. This information is not intended as, and shall not be understood or construed as, financial advice.

Replies from: johnswentworth
comment by johnswentworth · 2024-08-22T23:34:56.202Z · LW(p) · GW(p)

Nailed it, well done.

Replies from: faul_sname
comment by faul_sname · 2024-08-23T00:15:21.542Z · LW(p) · GW(p)

One point of confusion I still have is what a natural latent screens off information relative to the prediction capabilities of.

Let's say one of the models "YTDA" in the ensemble knows the beginning-of-year price of each stock, and uses "average year-to-date market appreciation" as its latent, and so learning the average year-to-date market appreciation of the S&P250odd will tell it approximately everything about that latent, and learning the year-to-date appreciation of ABT will give it almost no information it knows how to use about the year-to-date appreciation of AMGN.

So relative to the predictive capabilities of the YTDA model, I think it is true that "average year-to-date market appreciation" is a natural latent.

However, another model "YTDAPS" in the ensemble might use "per-sector average year-to-date market appreciation" as its latent. Since both the S&P250even and S&P250odd contain plenty of stocks in each sector, it is again the case that once you know the YTDAPS' latent conditioning on S&P250odd, learning the price of ABT will not tell the YTDAPS model anything about the price of AMGN.

But then if both of these are latents, does that mean that your theorem proves that any weighted sum of natural latents is also itself a natural latent?

comment by Thane Ruthenis · 2024-08-23T16:02:24.553Z · LW(p) · GW(p)

 Let's see if I get this right...

  • Let's interpret the set $X$ as the set of all possible visual sensory experiences $x = (x_1, \ldots, x_n)$, where $x_i$ defines the color of the $i$th pixel.
  • Different distributions over elements of this set correspond to observing different objects; for example, we can have $P^{\text{car}}[X]$ and $P^{\text{apple}}[X]$, corresponding to us predicting different sensory experiences when looking at cars vs. apples.
  • Let's take some specific set of observations $D$, from which we'd be trying to derive a latent.
  • We assume uncertainty regarding what objects generated the training-set observations, getting a mixture of distributions $P[X] = \sum_k \alpha_k P^k[X]$.
  • We derive a natural latent $\Lambda$ for $P[X]$ such that the naturality conditions hold for all allowed $\alpha$.
  • This necessarily implies that $\Lambda$ also induces independence between different sensory experiences for each individual distribution in the mixture: $P^{\text{car}}[X]$ and $P^{\text{apple}}[X]$.
  • If the set $D$ contains some observations generated by cars and some observations generated by apples, yet a nontrivial latent over the entire set nonetheless exists, then this latent must summarize information about some feature shared by both objects.
    • For example, perhaps it transpired that all cars depicted in this dataset are red, and all apples in this dataset are red, so $\Lambda$ ends up as "the concept of redness".
  • This latent then could, prospectively, be applied to new objects. If we later learn of the existence of markers – an object seeing which predicts yet another distribution over visual experiences – then $\Lambda$ would "know" how to handle this "out of the box". For example, if we have a set of observations such that it contains some red cars and some red ink, then $\Lambda$ would be natural over this set under both distributions, without us needing to recompute it.
  • This trick could be applied for learning new "features" of objects. Suppose we have some established observation-sets $D_{\text{car}}$ and $D_{\text{apple}}$, which have nontrivial natural latents $\Lambda_{\text{car}}$ and $\Lambda_{\text{apple}}$. To find new "object-agnostic" latents, we can try to form new sets of observations from subsets of those observations, define corresponding distributions, and see if mixtures of distributions over those subsets have nontrivial latents.
    • Formally: take $D' = D'_{\text{car}} \cup D'_{\text{apple}}$ where $D'_{\text{car}} \subset D_{\text{car}}$ and $D'_{\text{apple}} \subset D_{\text{apple}}$, then define $P[X] = \alpha_{\text{car}} P^{\text{car}}[X] + \alpha_{\text{apple}} P^{\text{apple}}[X]$, and we want to see if we have a new $\Lambda'$ that induces (approximate) independence between all $x_i$ both under the "apple" and the "car" distributions.
    • Though note that it could be done the other way around as well: we could first learn the latents of "redness" and e. g. "greenness" by grouping all red-having and green-having observations, then try to find some subsets of those sets which also have nontrivial natural latents, and end up deriving the latent of "car" by grouping all red and green objects that happen to be cars.
      • (Which is to say, I'm not necessarily sure there's a sharp divide between "adjectives" and "nouns" in this formulation. "The property of car-ness" is interpretable as an adjective here, and "greenery" is interpretable as a noun.)
    • I'd also expect that the latent over the red-cars dataset, i. e. $\Lambda_{\text{red car}}$, could be constructed out of $\Lambda_{\text{car}}$ and $\Lambda_{\text{red}}$ (derived, respectively, from a pure-cars dataset and an all-red dataset)? In other words, if we simultaneously condition a dataset of red cars on a latent derived from a dataset of any-colored cars and a latent derived from a dataset of red-colored objects, then this combined latent $(\Lambda_{\text{car}}, \Lambda_{\text{red}})$ would induce independence across the red-car observations (which $\Lambda_{\text{car}}$ wouldn't be able to do on its own, due to the instances sharing color-related information in addition to car-ness)?
  • All of this is interesting mostly in the approximate-latent regime (this allows us to avoid the nonrobust-to-tiny-mixtures trap [LW · GW]), and in situations in which we already have some established latents which we want to break down into interoperable features.
    1. In principle, if we have e. g. two sets of observations that we already know correspond to nontrivial latents, e. g. $D_{\text{car}}$ and $D_{\text{apple}}$, we could directly try to find subsets of their union that correspond to new nontrivial latents, in the hopes of recovering some features that'd correspond to grouping observations along some other dimension.
    2. But if we already have established "object-typed" probability distributions $P^{\text{car}}[X]$ and $P^{\text{apple}}[X]$, then hypothesizing that the observations are generated by an arbitrary mixture of these distributions allows us to "wash out" any information that doesn't actually correspond to some robustly shared features of cars-or-apples.
    3. That is: consider if $D'$ is 99% cars, 1% apples. Then an approximately correct natural latent over it is basically just $\Lambda_{\text{car}}$, maybe with some additional noise from apples thrown in. This is what we'd get if we used the "naive" procedure in (1) above. But if we're allowed to mix up the distributions, then "ramping" up the "apple" distribution (defining $\alpha_{\text{apple}} \gg \alpha_{\text{car}}$, say) would end up with low probabilities assigned to all observations corresponding to cars, and now the approximately correct natural latent over this dataset would have more apple-like qualities. The demand for the latent to be valid on arbitrary $\alpha$ then "washes out" all traces of car-ness and apple-ness, leaving only redness.

Is this about right? I'm getting a vague sense of some disconnect between this formulation and the OP...