A Toy-Model of Instrumental Abstraction

post by Zachary Robertson (zachary-robertson) · 2021-01-12T17:50:29.948Z · LW · GW · No comments

Contents

  Toy-Model
  Argument
  Discussion


Epistemic Status: This is a toy model meant to provide structure for thinking about instrumental abstraction as presented in the abstraction sequence. In fact, I would say the basic idea is perhaps obvious.

Neural representations certainly contain information, but it's not always clear what that information represents. DeepMind recently proposed probing an agent's internal states as a means to study and quantify the knowledge in the internal representations of neural-network-based agents. This could be thought of as the ML equivalent of neural decoding, which studies the information available in the activity patterns of networks of neurons.
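To make the probing idea concrete, here is a minimal sketch of one common flavor of it (not necessarily DeepMind's exact method): freeze a model's hidden states, fit a simple linear read-out on top of them, and check whether it predicts a feature of interest. All data here is synthetic and the variable names are hypothetical.

```python
# A minimal probing sketch on synthetic data: fit a linear read-out on
# frozen "hidden states" to test whether they encode a target feature.
import numpy as np

rng = np.random.default_rng(0)

# Pretend these are an agent's internal representations (n samples, d dims).
H = rng.normal(size=(200, 8))

# Assume the feature of interest is linearly encoded in the representation.
w_true = rng.normal(size=8)
y = H @ w_true

# The probe is a separate model fit on top of the frozen representations.
w_probe, *_ = np.linalg.lstsq(H, y, rcond=None)

# If the probe predicts the feature well, the representation "contains" it.
r2 = 1 - np.sum((H @ w_probe - y) ** 2) / np.sum((y - y.mean()) ** 2)
```

A high `r2` is evidence that the feature is (linearly) decodable from the representation, which is the sense of "knowledge" the probing literature tends to measure.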

Say we have observations $x \in X$ and then design a training set based on a query $q : X \to A$. So each query maps an observation to an answer. If we fix our query $q$ then generally we have a computational process that looks like this,

$$x \to h_1 \to h_2 \to \cdots \to h_n \to q(x),$$

where each arrow could be thought of as a layer of a neural network. We'll say that $M = (h_1, \dots, h_n)$ is a model for the query. So $M$ takes in information $x$ and then generates a sequence of representations that eventually match up with the answer $q(x)$ at the very end.
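This computational process can be sketched directly: a model is just a chain of maps, and the intermediate outputs are the representations. The query below (summing an observation's entries through a partial-sums stage) is a hypothetical example, not anything from the post.

```python
# Sketch of the process x -> h1 -> ... -> hn -> q(x): a model is a list of
# stages, and running it records every intermediate representation.
def run(model, x):
    """Apply each stage in turn, keeping all intermediate representations."""
    reps = [x]
    for f in model:
        reps.append(f(reps[-1]))
    return reps

# Hypothetical query: the sum of the observation's entries, computed
# through an intermediate (partial-sums) representation.
model = [
    lambda x: [x[0] + x[1], x[2] + x[3]],  # h1: a coarser representation
    lambda h: h[0] + h[1],                 # final stage: the answer q(x)
]

reps = run(model, [1, 2, 3, 4])
# reps[0] is the input, reps[-1] is the answer, everything between is internal.
```

The toy model below only cares about the *sizes* of the entries of `reps`, not their contents.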

Toy-Model

Assign two measures to $M$: one dependent on the size of the input, $|x|$, and another on the size of its least representation, $\min_i |h_i|$. If this were a neural network, the latter would be equivalent to asking for the smallest hidden layer in the network. We'll assume that both are strictly increasing functions of the dimension of their arguments.

The input size is natural enough: the more information $M$ needs to produce answers, the larger the first number. How about the second measure? Say we have a collection of questions $q_1, \dots, q_n$ that we'd like to ask about our observations. We'd have something like,

$$x \to M_i \to q_i(x).$$

Say each question $q_i$ can be answered with a model $M_i$; then we could assign a cost based on how much information is being used. We'd have something like,

$$C = \sum_{i=1}^{n} c(|x|) = n \cdot c(|x|),$$

where $c$ is strictly increasing in the size of the model's input.

Here's where the second measure is important. Consider the identity query $q_{\mathrm{id}}(x) = x$. This just returns the input. Simple enough. However, if the internal representation has a size lower than the input, this could help us. This is called a bottleneck. Why? Rewrite our computational process as,

$$x \to h \to x,$$

where the size of the internal representation $h$ is the minimal representation in $M$. Formally, $|h| = \min_i |h_i|$. Now we've exposed the internal representation. By definition, this has enough information to reconstruct the input. Therefore every other question we could possibly ask could be answered as follows,

$$x \to h \to q_i(x).$$

If the internal representation is smaller than the original input, then the total cost for the other models would be smaller.
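A tiny sketch of the bottleneck idea, under an artificial assumption I'm adding purely for illustration: observations are redundant (each value appears twice), so a representation half the size still reconstructs the input exactly, and any other query can then read the bottleneck instead of the raw input.

```python
# Bottleneck sketch for the identity query, assuming (hypothetically) that
# observations duplicate every value, so half the entries suffice.
def encode(x):
    """Keep one copy of each duplicated value: the bottleneck h."""
    return x[::2]

def decode(h):
    """Reconstruct the full observation from the bottleneck."""
    out = []
    for v in h:
        out += [v, v]
    return out

x = [5, 5, 7, 7, 9, 9]
h = encode(x)

assert decode(h) == x  # h answers the identity query exactly

# Any other query can now be answered from h instead of x, e.g. the sum:
q_sum = lambda rep: 2 * sum(rep)  # sum of x, computed from the smaller h
```

Here `|h|` is half of `|x|`, so every model stacked on top of `h` pays the smaller input cost.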

Argument

Given queries $q_1, \dots, q_n$, generate a model that has minimal input measure. Let's consider two approaches to this. In the first approach we model each query separately. So we have $n$ diagrams of the following form,

$$x \to M_i \to q_i(x),$$

which implies the total cost measure is $n \cdot c(|x|)$. We could also single out the identity question and proceed according to,

$$x \to h \to x, \qquad h \to q_i(x),$$

which results in a total cost measure of $c(|x|) + (n-1) \cdot c(|h|)$. Comparing the costs we see that,

$$c(|x|) + (n-1) \cdot c(|h|) \le n \cdot c(|x|) \iff c(|h|) \le c(|x|).$$

This implies that to minimize the total cost we want to minimize $|h|$. So even if $c(|h|)$ were only marginally smaller than $c(|x|)$, the savings would compound across the remaining queries, and there would still be an incentive to learn a compressed representation of the input.
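The cost comparison can be checked numerically. The sizes below are made-up, and the cost function is taken to be the identity on size, which satisfies the strictly-increasing assumption.

```python
# Compare the two approaches' total costs under an assumed cost function
# c(size) = size (any strictly increasing c gives the same ordering).
def total_cost(input_sizes, c=lambda s: s):
    """Sum the input cost over all models in a configuration."""
    return sum(c(s) for s in input_sizes)

n_queries, input_size, bottleneck_size = 10, 100, 20

# Approach 1: every query gets its own model reading the full observation.
direct = total_cost([input_size] * n_queries)

# Approach 2: one identity model exposes a bottleneck h with |h| < |x|,
# and the remaining n - 1 queries read h instead of x.
shared = total_cost([input_size] + [bottleneck_size] * (n_queries - 1))
```

With these numbers the shared-bottleneck configuration is much cheaper, and the gap grows with the number of queries, matching the inequality above.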

Discussion

The toy model presented above is interesting in one respect: if we pay for the number of variables we use to answer questions, then modeling a collection of queries is best done by specializing one model to create an abstraction of the observations.

It's worth noting that for more complicated data, it's possible that the minimum-cost configuration would be hierarchical. There would still be an initial compression, but after that there could be different specializations used for different queries. I'd assume this is also the case when queries only operate on subsets of the observation space.

This was an attempt to think about how a growing network could be modeled. When we decode neurons, we build an additional model on top of what was already there. If models are rewarded simply for having internal representations that are useful to other models, then it seems likely we'd end up with a similar result.

The idea here would be that if we're managing the growth of a network, then each patch should be as small as possible, and each patch should be useful in the creation of future patches.