Natural Latents Are Not Robust To Tiny Mixtures

post by johnswentworth, David Lorell · 2024-06-07T18:53:36.643Z · LW · GW · 8 comments

Contents

  The Tiny Mixtures Counterexample
  What To Do Instead?
    Different Kind of Approximation
    Additional Requirements for Natural Latents
    Same Distribution
  ADDED July 9: The Competitively Optimal Natural Latent from Resampling Always Works (At Least Mediocrely)

In our previous natural latent posts [LW · GW], our core theorem typically says something like:

Assume two agents have the same predictive distribution $P[X]$ over variables $X_1, \dots, X_n$, but model that distribution using potentially-different latent variables. If the latents both satisfy some simple “naturality” conditions (mediation and redundancy) then the two agents’ latents contain approximately the same information about $X$. So, insofar as the two agents both use natural latents internally, we have reason to expect that the internal latents of one can be faithfully translated into the internal latents of the other.

This post is about one potential weakness in that claim: what happens when the two agents’ predictive distributions are only approximately the same?

Following the pattern of our previous theorems, we’d ideally say something like

If the two agents’ distributions are within $\epsilon$ of each other (as measured by some KL-divergences), then their natural latents contain approximately the same information about $X$, to within some $O(\epsilon)$ bound.

But that turns out to be false.

The Tiny Mixtures Counterexample

Let’s start with two distributions, $P_0$ and $Q_0$, over $X_1, X_2$. These won’t be our two agents’ distributions - we’re going to construct our two agents’ distributions by mixing these two together, as the name “tiny mixtures” suggests.

$P_0$ and $Q_0$ will have extremely different natural latents. Specifically:

$P_0[X_1 = x_1, X_2 = x_2] = 2^{-1000000}\,\mathbb{1}[x_1 = x_2]$, i.e. $X_1$ is a uniform random million-bit string and $X_2 = X_1$
$Q_0[X_1 = x_1, X_2 = x_2] = 2^{-2000000}$, i.e. $X_1$ and $X_2$ are independent uniform random million-bit strings

Mental picture: we have a million-bit channel, under $P_0$ the output ($X_2$) is equal to the input ($X_1$), while under $Q_0$ the channel hardware is maintained by Comcast so they’re independent.

Now for our two agents’ distributions, $P$ and $Q$: $P$ will be almost $P_0$, and $Q$ will be almost $Q_0$, but each agent puts a $2^{-50}$ probability on the other distribution:

$P[X] = (1 - 2^{-50})\,P_0[X] + 2^{-50}\,Q_0[X]$
$Q[X] = 2^{-50}\,P_0[X] + (1 - 2^{-50})\,Q_0[X]$

First key observation: $D_{KL}(P \| Q)$ and $D_{KL}(Q \| P)$ are both roughly 50 bits. Calculation:

$D_{KL}(P \| Q) = \sum_X P[X] \log_2 \frac{P[X]}{Q[X]} \approx \sum_{X: x_1 = x_2} P_0[X] \log_2 \frac{P_0[X]}{2^{-50} P_0[X]} = 50$
$D_{KL}(Q \| P) = \sum_X Q[X] \log_2 \frac{Q[X]}{P[X]} \approx \sum_X Q_0[X] \log_2 \frac{Q_0[X]}{2^{-50} Q_0[X]} = 50$

Intuitively: since each distribution puts roughly $2^{-50}$ on the other, it takes about 50 bits of evidence to update from either one to the other.

Second key observation: the empty latent is approximately natural under $Q$, and the latent $\Lambda := X_1$ is approximately natural under $P$. Epsilons:

For the empty latent under $Q$, the only nontrivial condition is mediation, i.e. $X_1$ and $X_2$ approximately independent: $I_Q(X_1; X_2) \approx 1000000 \cdot 2^{-50} \approx 10^{-9}$ bits.
For $\Lambda := X_1$ under $P$, mediation is exact, and the redundancy condition (that $X_2$ alone approximately determines $\Lambda$) holds to within $H_P(X_1 | X_2) \approx 1000000 \cdot 2^{-50} \approx 10^{-9}$ bits.

… and of course the information those two latents tell us about $X$ differs by 1 million bits: one of them is empty, and the other directly tells us 1 million bits about $X$.

Now, let’s revisit the claim we would’ve liked to make:

If the two agents’ distributions are within $\epsilon$ of each other (as measured by some KL-divergences), then their natural latents contain approximately the same information about $X$, to within some $O(\epsilon)$ bound.

Tiny mixtures rule out any claim along those lines. Generalizing the counterexample to an $N$-bit channel (where $N = 1000000$ above) and a mixin probability of $2^{-K}$ (where $K = 50$ above), we generally see that the two latents are natural over their respective distributions to about $\epsilon \approx N \cdot 2^{-K}$, the $D_{KL}$ between the distributions is about $K$ bits in either direction, yet one latent contains $N$ bits of information about $X$ while the other contains zero. By choosing $K \ll N \ll 2^K$, with both $N$ and $K$ large, we can get arbitrarily precise natural latents over the two distributions, with the difference in the latents exponentially large with respect to the $D_{KL}$’s between distributions.
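
To make those numbers concrete, here is a small numerical check (our own sketch, not part of the original post). It exploits the fact that the joint distribution has only two “types” of outcomes, diagonal ($x_1 = x_2$) and off-diagonal, so every quantity above has a closed form; the function name and the scaled-down parameters are ours, but the structure is exactly the $N$-bit channel with $2^{-K}$ mixin described above.

```python
import math

def tiny_mixture_stats(N, K):
    """Scaled-down tiny-mixtures example: an N-bit channel with mixin
    probability m = 2^-K.  P0: X1 uniform, X2 = X1.  Q0: X1, X2 independent
    uniform.  P = (1-m) P0 + m Q0,  Q = m P0 + (1-m) Q0.
    Everything is computed from the two "types" of outcomes:
    diagonal (x1 == x2) and off-diagonal (x1 != x2)."""
    m = 2.0 ** -K
    n_diag = 2.0 ** N                      # number of diagonal outcomes
    n_off = 2.0 ** N * (2.0 ** N - 1.0)    # number of off-diagonal outcomes

    # per-outcome probabilities under P and Q
    p_diag = (1 - m) * 2.0 ** -N + m * 2.0 ** (-2 * N)
    p_off = m * 2.0 ** (-2 * N)
    q_diag = m * 2.0 ** -N + (1 - m) * 2.0 ** (-2 * N)
    q_off = (1 - m) * 2.0 ** (-2 * N)

    def kl(a_diag, a_off, b_diag, b_off):  # D_KL(A || B), in bits
        return (n_diag * a_diag * math.log2(a_diag / b_diag)
                + n_off * a_off * math.log2(a_off / b_off))

    def joint_entropy(a_diag, a_off):      # H(X1, X2), in bits
        return -(n_diag * a_diag * math.log2(a_diag)
                 + n_off * a_off * math.log2(a_off))

    # both marginals are uniform N-bit strings under P and under Q,
    # so H(X1) = H(X2) = N under each distribution
    return {
        "D_KL(P||Q)": kl(p_diag, p_off, q_diag, q_off),
        "D_KL(Q||P)": kl(q_diag, q_off, p_diag, p_off),
        # naturality error of the empty latent under Q: I_Q(X1; X2)
        "I_Q(X1;X2)": 2 * N - joint_entropy(q_diag, q_off),
        # naturality error of Lambda := X1 under P: H_P(X1 | X2)
        "H_P(X1|X2)": joint_entropy(p_diag, p_off) - N,
    }

N, K = 200, 10  # scaled down from N = 1000000, K = 50 in the post
for name, val in tiny_mixture_stats(N, K).items():
    print(f"{name:12s} = {val:.4f} bits")
```

At $N = 200$, $K = 10$, the two KL-divergences come out near $K = 10$ bits, while both naturality errors come out near $N \cdot 2^{-K} \approx 0.2$ bits, matching the scaling claimed above.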

What To Do Instead?

So the bound we’d ideally like is ruled out. What alternatives might we aim for?

Different Kind of Approximation

Looking at the counterexample, one thing which stands out is that $P$ and $Q$ are, intuitively, very different distributions. Arguably, the problem is that a “small” $D_{KL}$ just doesn’t imply that the distributions are all that close together; really we should use some other kind of approximation.

On the other hand, $D_{KL}$ is a principled error-measure with nice properties, and in particular it naturally plugs into information-theoretic or thermodynamic machinery. And indeed, we are hoping to plug all this theory into thermodynamic-style machinery down the road. For that, we need global bounds, and they need to be information-theoretic.

Additional Requirements for Natural Latents

Coming from another direction: a 50-bit update can turn $P$ into $Q$, or vice-versa. So one thing this example shows is that natural latents, as they’re currently formulated, are not necessarily robust to even relatively small updates, since 50 bits can quite dramatically change a distribution.

Interestingly, there do exist other natural latents over these two distributions which are approximately the same (under their respective distributions) as the two natural latents we used above, but more robust (in some ways) to turning one distribution into the other. In particular: we can always construct a natural latent with competitively optimal approximation via resampling [LW · GW]. Applying that construction to $Q$, we get a latent which is usually independent random noise (which gives the same information about $X$ as the empty latent), but there’s a $2^{-50}$ chance that it contains the value of $X_1$ and another $2^{-50}$ chance that it contains the value of $X_2$. Similarly, we can use the resampling construction to find a natural latent for $P$, and it will have a $2^{-50}$ chance of containing random noise instead of $X_1$, and an independent $2^{-50}$ chance of containing random noise instead of $X_2$.

Those two latents still differ in their information content about $X$ by roughly 1 million bits, but the distribution of $X$ given each latent differs by only about 100 bits in expectation. Intuitively: while the agents still strongly disagree about the distribution of their respective latents, they agree (to within ~100 bits) on what each value of the latent says about $X$.
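
As a rough sanity check on that ~100-bit figure (again our own sketch, not from the post), one natural way to quantify the agreement is the expected divergence between the two constructions’ conditional distributions over the latent given $X$, i.e. $\mathbb{E}_{X \sim P}[D_{KL}(P[\Lambda|X] \| Q[\Lambda|X])]$, the measure used in the addendum below; this may differ in detail from the exact quantity the authors had in mind. For the tiny-mixtures structure it reduces to twice a single closed-form term, and comes out to roughly $2K = 100$ bits, essentially independent of the channel size $N$:

```python
import math

def resampling_latent_agreement(N, K):
    """E_{X~P}[ D_KL( P[Lambda|X] || Q[Lambda|X] ) ] for the tiny-mixtures
    setup, where Lambda = (Lambda1, Lambda2) is the resampling latent:
    Lambda1 is resampled from the conditional of X1 given X2, and Lambda2
    from the conditional of X2 given X1, independently given X, under P
    (resp. Q).

    Under either distribution, the conditional of X1 given X2 = x2 puts
    some weight on the single value x2 and spreads the rest uniformly over
    the other 2^N - 1 values, so the per-component KL is a closed form
    that does not depend on which x2 we condition on."""
    m = 2.0 ** -K
    # probability that the resampled X1 equals the observed X2, under P and Q
    p_hit = (1 - m) + m * 2.0 ** -N
    q_hit = m + (1 - m) * 2.0 ** -N
    # probability of each of the other 2^N - 1 values
    p_miss = m * 2.0 ** -N
    q_miss = (1 - m) * 2.0 ** -N
    per_component = (p_hit * math.log2(p_hit / q_hit)
                     + (2.0 ** N - 1) * p_miss * math.log2(p_miss / q_miss))
    return 2 * per_component  # two components, independent given X

# N scaled down from 1000000 purely to stay in floating-point range;
# the answer depends on N only through 2^-N corrections.
print(resampling_latent_agreement(N=200, K=50))  # ~100 bits
```

Contrast that ~100 bits of disagreement about how the latent responds to $X$ with the roughly $N$ bits by which the two latents’ information content about $X$ differs.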

Does that generalize beyond this one example? We don’t know yet.

But if it turns out that the competitively optimal natural latent is generally robust to updates, in some sense, then it might make sense to add a robustness-to-updates requirement for natural latents - require that we use the “right” natural latent, in order to handle this sort of problem.

Same Distribution

A third possible approach is to formulate the theory around a single distribution.

For instance, we could assume that the environment follows some “true distribution”, and both agents look for latents which are approximately natural over the “true distribution” (as far as they can tell, since the agents can’t observe the whole environment distribution directly). This would probably end up with a Fristonian flavor.

ADDED July 9: The Competitively Optimal Natural Latent from Resampling Always Works (At Least Mediocrely)

Recall that, for a distribution $P[X_1, \dots, X_n]$, we can always construct a competitively optimal natural latent (under strong redundancy) $\Lambda$ by resampling each component $X_i$ conditional on the others $X_{\bar{i}}$, i.e.

$P[\Lambda = \lambda \mid X] = \prod_i P[X_i = \lambda_i \mid X_{\bar{i}}]$
We argued above that this specific natural latent works just fine in the tiny mixtures counterexample: roughly speaking, the resampling natural latent constructed for $P$ approximates the resampling natural latent constructed for $Q$ (to within an error comparable to how well $P$ approximates $Q$).

Now we'll show that that generalizes. Our bound will be mediocre, but any bound at all is progress.

Specifically: suppose we have two distributions over the same variables $X_1, \dots, X_n$, $P[X]$ and $Q[X]$. We construct a competitively optimal natural latent ($\Lambda$ for $P$, $\Lambda'$ for $Q$) via resampling for each distribution:

$P[\Lambda = \lambda \mid X] = \prod_i P[X_i = \lambda_i \mid X_{\bar{i}}]$
$Q[\Lambda' = \lambda \mid X] = \prod_i Q[X_i = \lambda_i \mid X_{\bar{i}}]$
Then, we'll use $\mathbb{E}_{X \sim P}[D_{KL}(P[\Lambda|X] \| Q[\Lambda'|X])]$ (with expectation taken over $X$ under distribution $P$) as a measure of how well $P$'s latent $\Lambda$ matches $Q$'s latent $\Lambda'$. Core result:

$\mathbb{E}_{X \sim P}\left[D_{KL}(P[\Lambda|X] \| Q[\Lambda'|X])\right] \le n \, D_{KL}(P[X] \| Q[X])$
Proof: Under both constructions the latent components are independent given $X$, with each component distributed as $X_i$ conditional on $X_{\bar{i}}$, so

$D_{KL}(P[\Lambda|X] \| Q[\Lambda'|X]) = \sum_i D_{KL}(P[X_i \mid X_{\bar{i}}] \| Q[X_i \mid X_{\bar{i}}])$

Taking the expectation over $X \sim P$ and applying the chain rule for $D_{KL}$ to each term:

$\mathbb{E}_{X \sim P}\left[D_{KL}(P[X_i \mid X_{\bar{i}}] \| Q[X_i \mid X_{\bar{i}}])\right] = D_{KL}(P[X] \| Q[X]) - D_{KL}(P[X_{\bar{i}}] \| Q[X_{\bar{i}}]) \le D_{KL}(P[X] \| Q[X])$

Summing over $i$ then gives the core result.
So we have a bound. Unfortunately, the factor of $n$ (number of variables) makes the bound kinda mediocre. We could sidestep that problem in practice by just using natural latents over a small number of variables at any given time (which is actually fine for many and arguably most use cases). But based on the proof, it seems like we should be able to improve a lot on that factor of $n$; in each term we outright add $D_{KL}(P[X_{\bar{i}}] \| Q[X_{\bar{i}}])$, which should typically be much larger than the quantity we're trying to bound.
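
For what it’s worth, here is a brute-force numerical check of a bound of this form (our own sketch; the helper functions and names are ours, and the inequality checked is the one as reconstructed above). It builds the resampling latent for two random distributions over a few binary variables and compares $\mathbb{E}_{X \sim P}[D_{KL}(P[\Lambda|X] \| Q[\Lambda|X])]$ against $n \, D_{KL}(P[X] \| Q[X])$ by direct enumeration:

```python
import itertools
import math
import random

def random_dist(n, rng):
    """Random strictly positive distribution over {0,1}^n, as a dict."""
    outcomes = list(itertools.product((0, 1), repeat=n))
    weights = [rng.random() + 0.05 for _ in outcomes]
    total = sum(weights)
    return {x: w / total for x, w in zip(outcomes, weights)}

def conditional(dist, i, x):
    """dist[X_i = v | X_{bar i} = x_{bar i}] for v in {0, 1}."""
    probs = [dist[x[:i] + (v,) + x[i + 1:]] for v in (0, 1)]
    total = sum(probs)
    return [p / total for p in probs]

def latent_given_x(dist, x):
    """Resampling latent: dist[Lam = lam | X = x] = prod_i dist[X_i = lam_i | X_{bar i}]."""
    n = len(x)
    conds = [conditional(dist, i, x) for i in range(n)]
    return {lam: math.prod(conds[i][lam[i]] for i in range(n))
            for lam in itertools.product((0, 1), repeat=n)}

def kl(p, q):
    """D_KL(p || q) in bits; p and q are dicts over the same support."""
    return sum(p[z] * math.log2(p[z] / q[z]) for z in p)

rng = random.Random(0)
n = 3
P = random_dist(n, rng)
Q = random_dist(n, rng)

lhs = sum(P[x] * kl(latent_given_x(P, x), latent_given_x(Q, x)) for x in P)
rhs = n * kl(P, Q)
print(f"E_P[ D_KL(P[Lam|X] || Q[Lam|X]) ] = {lhs:.4f} bits")
print(f"n * D_KL(P || Q)                  = {rhs:.4f} bits")
assert lhs <= rhs + 1e-9
```

With this seed the left-hand side comes out well under the right-hand side, consistent with the proof sketch: the slack is exactly the dropped marginal-divergence terms.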

8 comments


comment by Thane Ruthenis · 2024-06-07T20:40:43.571Z · LW(p) · GW(p)

Coming from another direction: a 50-bit update can turn $P$ into $Q$, or vice-versa. So one thing this example shows is that natural latents, as they’re currently formulated, are not necessarily robust to even relatively small updates, since 50 bits can quite dramatically change a distribution.

Are you sure this is undesired behavior? Intuitively, small updates (relative to the information-content size of the system regarding which we're updating) can drastically change how we're modeling a particular system, into what abstractions we decompose it. E. g., suppose we have two competing theories regarding how to predict the neural activity in the human brain, and a new paper comes out with some clever (but informationally compact) experiment that yields decisive evidence in favour of one of those theories. That's pretty similar to the setup in the post here, no? And reading this paper would lead to significant ontology shifts in the minds of the researchers who read it.

Which brings to mind How Many Bits Of Optimization Can One Bit Of Observation Unlock? [LW · GW], and the counter-example there...

Indeed, now that I'm thinking about it, I'm not sure the quantity $D_{KL}(P \| Q)$ is in any way interesting at all? Consider that the researchers' minds could be updated either from reading the paper and examining the experimental procedure in detail (a "medium" number of bits), or by looking at the raw output data and then doing a replication of the paper (a "large" number of bits), or just by reading the names of the authors and skimming the abstract (a "small" number of bits).

There doesn't seem to be a direct causal connection between the system's size and the amount of bits needed to drastically update on its structure at all? You seem to expect some sort of proportionality between the two, but I think the size of one is straight-up independent of the size of the other if you let the nature of the communication channel between the system and the agent-doing-the-updating vary freely (i. e., if you're uncertain regarding whether it's "direct observation of the system" OR "trust in science" OR "trust in the paper's authors" OR ...).[1]

Indeed, merely describing how you need to update using high-level symbolic languages, rather than by throwing raw data about the system at you, already shaves off a ton of bits, decoupling "the size of the system" from "the size of the update".

Perhaps $D_{KL}$ really isn't the right metric to use, here? The motivation for having natural abstractions in your world-model is that they make the world easier to predict for the purposes of controlling said world. So similar-enough natural abstractions would recommend the same policies for navigating that world. Back-tracking further, the distributions that would give rise to similar-enough natural abstractions would be distributions that correspond to worlds the policies for navigating which are similar-enough...

I. e., the distance metric would need to take interventions/the $do()$ operator into account. Something like SID comes to mind (but not literally SID, I expect).

  1. ^

    Though there may be some more interesting claim regarding that entire channel? E. g., that if the agent can update drastically just based on a few bits output by this channel, we have to assume that the channel contains "information funnels" which compress/summarize the raw state of the system down? That these updates have to be entangled with at least however-many-bits describing the ground-truth state of the system, for them to be valid?

Replies from: johnswentworth, tailcalled
comment by johnswentworth · 2024-06-07T20:43:14.282Z · LW(p) · GW(p)

Which brings to mind How Many Bits Of Optimization Can One Bit Of Observation Unlock? [LW · GW], and the counter-example there...

We actually started from that counterexample, and the tiny mixtures example grew out of it.

comment by tailcalled · 2024-06-08T13:56:37.462Z · LW(p) · GW(p)

In the context of alignment, we want to be able to pin down which concepts we are referring to, and natural latents were (as I understand it) partly meant to be a solution to that. However if there are multiple different concepts that fit the same natural latent but function very differently then that doesn't seem to solve the alignment aspect.

Replies from: Thane Ruthenis
comment by Thane Ruthenis · 2024-06-08T14:04:18.288Z · LW(p) · GW(p)

I do see the intuitive angle of "two agents exposed to mostly-similar training sets should be expected to develop the same natural abstractions, which would allow us to translate between the ontologies of different ML models and between ML models and humans", and that this post illustrated how one operationalization of this idea failed.

However if there are multiple different concepts that fit the same natural latent but function very differently 

That's not quite what this post shows, I think? It's not that there are multiple concepts that fit the same natural latent, it's that if we have two distributions that are judged very close by the KL divergence, and we derive the natural latents for them, they may turn out drastically different. The $P$ agent and the $Q$ agent legitimately live in very epistemically different worlds!

Which is likely not actually the case for slightly different training sets, or LLMs' training sets vs. humans' life experiences. Those are very close on some metric $d$, and now it seems that $d$ isn't (just) $D_{KL}$.

Replies from: tailcalled
comment by tailcalled · 2024-06-08T14:31:55.082Z · LW(p) · GW(p)

Maybe one way to phrase it is that the X's represent the "type signature" of the latent, and the type signature is the thing we can most easily hope is shared between the agents, since it's "out there in the world" as it represents the outwards interaction with things. We'd hope to be able to share the latent simply by sharing the type signature, because the other thing that determines the latent is the agents' distribution, but this distribution is more an "internal" thing that might be too complicated to work with. But the proof in the OP shows that the type signature is not enough to pin it down, even for agents whose models are highly compatible with each other as-measured-by-KL-in-type-signature.

Replies from: Thane Ruthenis
comment by Thane Ruthenis · 2024-06-08T16:23:20.336Z · LW(p) · GW(p)

Sure, but what I question is whether the OP shows that the type signature wouldn't be enough for realistic scenarios where we have two agents trained on somewhat different datasets. It's not clear that their datasets would be different the same way $P$ and $Q$ are different here.

comment by Garrett Baker (D0TheMath) · 2024-06-07T20:35:51.412Z · LW(p) · GW(p)

I may misunderstand (I’ve only skimmed), but it’s not clear to me we want natural latents to be robust to small updates. Phase changes and bifurcation points seem like something you should expect here. I would however feel more comfortable if such points had small or infinitesimal measure.

comment by Thane Ruthenis · 2024-06-08T16:30:45.451Z · LW(p) · GW(p)

Another angle to consider: in this specific scenario, would realistic agents actually derive natural latents for $P$ and $Q$ as a whole, as opposed to deriving two mutually incompatible latents for the $P_0$ and $Q_0$ components, then working with a probability distribution over those latents?

Intuitively, that's how humans operate if they have two incompatible hypotheses about some system. We don't derive some sort of "weighted-average" ontology for the system, we derive two separate ontologies and then try to distinguish between them.

This post [LW · GW] comes to mind:

If you only care about betting odds, then feel free to average together mutually incompatible distributions reflecting mutually exclusive world-models. If you care about planning then you actually have to decide which model is right or else plan carefully for either outcome.

Like, "just blindly derive the natural latent" is clearly not the whole story about how world-models work. Maybe realistic agents have some way of spotting setups structured the way the OP is structured, and then they do something more than just deriving the latent.