Abstraction = Information at a Distance

post by johnswentworth · 2020-03-19T00:19:49.189Z · 1 comment


Why is abstraction useful? Why use a high-level model rather than a low-level model?

An example: when I type “4+3” in a python shell, I think of that as adding two numbers, not as a bunch of continuous voltages driving electric fields and current flows in little patches of metal and doped silicon. Why? Because, if I’m thinking about what will show up on my monitor after I type “4+3” and hit enter, then the exact voltages and current flows on the CPU are not relevant. This remains true even if I’m thinking about the voltages driving individual pixels in my monitor - even at a fairly low level, the exact voltages in the arithmetic-logic unit on the CPU aren’t relevant to anything more than a few microns away - except for the high-level information contained in the “numbers” passed in and out.
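
As a toy illustration of those two levels of description (my own sketch, not from the post), here is 4+3 computed as ordinary integer addition and again bit-by-bit, roughly the way an ALU would - anything downstream sees only the resulting number either way:

```python
# Toy sketch (my own example, not from the post): 4 + 3 computed at two levels
# of abstraction. Anything downstream only ever sees the resulting number.
def ripple_carry_add(a, b, width=8):
    """Add two integers roughly the way hardware does: bit by bit, with a carry."""
    result, carry = 0, 0
    for i in range(width):
        bit_a, bit_b = (a >> i) & 1, (b >> i) & 1
        result |= (bit_a ^ bit_b ^ carry) << i
        carry = (bit_a & bit_b) | (carry & (bit_a ^ bit_b))
    return result

print(4 + 3)                   # high-level model
print(ripple_carry_add(4, 3))  # lower-level model: same answer, details invisible downstream
```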

Another example: if I’m an astronomer predicting the trajectory of the sun, then I’m presumably going to treat other stars as point-masses. At such long distances, the exact mass distribution within the star doesn’t really matter - except for the high-level information contained in the total mass and center-of-mass location.
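
To see how quickly the internal details stop mattering, here is a small numerical sketch (my own example): the gravitational pull of a lumpy mass distribution, compared against a point mass with the same total mass and center of mass, at increasing distances.

```python
import numpy as np

# Toy "star" made of lumps of mass (my own example). Up close the lumps matter;
# far away, only the total mass and center of mass do.
rng = np.random.default_rng(0)
G = 1.0  # units chosen so that G = 1

lump_positions = rng.normal(scale=1.0, size=(50, 3))  # lumps spread over ~1 unit
lump_masses = rng.uniform(0.5, 1.5, size=50)

total_mass = lump_masses.sum()
center_of_mass = (lump_masses[:, None] * lump_positions).sum(axis=0) / total_mass

def accel_exact(r):
    """Gravitational acceleration at r, summing over every lump individually."""
    d = lump_positions - r
    dist = np.linalg.norm(d, axis=1, keepdims=True)
    return (G * lump_masses[:, None] * d / dist**3).sum(axis=0)

def accel_point_mass(r):
    """Acceleration treating the star as a point mass at its center of mass."""
    d = center_of_mass - r
    return G * total_mass * d / np.linalg.norm(d)**3

for distance in [3.0, 30.0, 300.0]:
    r = np.array([distance, 0.0, 0.0])
    rel_err = np.linalg.norm(accel_exact(r) - accel_point_mass(r)) / np.linalg.norm(accel_exact(r))
    print(f"distance {distance:6.1f}: point-mass relative error = {rel_err:.2e}")
```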

If I’m running a Markov-Chain Monte Carlo algorithm, then I take sample points fairly far apart in “time”. As long as they’re far enough apart, they’re roughly independent - there isn’t any information from one sample relevant to the next.
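
A quick sketch of that effect (my own toy example): a random-walk Metropolis chain targeting a standard normal, with autocorrelation falling off as the lag between samples grows.

```python
import numpy as np

# Toy random-walk Metropolis chain targeting a standard normal (my own example).
# Nearby samples are correlated; samples far apart in "time" are nearly independent.
rng = np.random.default_rng(0)

def log_target(x):
    return -0.5 * x**2  # standard normal, up to an additive constant

n_steps = 50_000
chain = np.empty(n_steps)
x = 0.0
for t in range(n_steps):
    proposal = x + rng.normal(scale=0.5)
    if np.log(rng.uniform()) < log_target(proposal) - log_target(x):
        x = proposal  # accept the proposal
    chain[t] = x

def autocorrelation(samples, lag):
    s = samples - samples.mean()
    return np.dot(s[:-lag], s[lag:]) / np.dot(s, s)

for lag in [1, 10, 50, 200]:
    print(f"lag {lag:4d}: autocorrelation ≈ {autocorrelation(chain, lag):+.3f}")
```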

If I’m planning a roadtrip from San Francisco to Los Angeles, the details of my route through the Bay Area are irrelevant to planning my route within LA - except for the high-level information contained in my choice of highway for the middle leg of the trip and the rough time I expect to get there.

General point: abstraction, in practice, is about keeping information which is relevant to things “far away”, and throwing out everything else.

Formalization

Let’s start with a bunch of random variables $X_1, \dots, X_n$, and some notion of which variables are “nearby”: each variable $X_i$ has a set of indices of variables considered “nearby”, $N(i)$. How $N(i)$ is chosen may vary by application - maybe each $X_i$ is associated with some point in space and/or time, or maybe we’re looking at Markov blankets in a graphical model, or …

We want some high-level summary of $X_i$; we’ll define that by a function $f$. We require that $f(X_i)$ contain all information relevant to things far away - i.e. $X_{\bar{N}(i)}$, the variables not in $N(i)$.

We’ll consider a few different notions of “relevance” here. First and most obvious is predictive relevance - $f(X_i)$ must contain all relevant information in the usual probabilistic/information-theoretic sense. Key subtlety: which information is relevant may itself depend on the values of other variables - e.g. maybe we have a conditional in a program which picks one of two variables to return. Should we keep around all information which is relevant in any possible case? All information which is relevant after averaging over some variables?
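
A tiny illustration of that subtlety (my own example, not from the post):

```python
# Tiny illustration (my own example): which input's information is "relevant"
# to the output depends on the value of another variable.
def result(flag, a, b):
    return a if flag else b

print(result(True, 4, 99))   # only `a` mattered here
print(result(False, 4, 99))  # only `b` mattered here
```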

Looking back over the examples, I think the natural answer is: we’re keeping information relevant to things “far away” (i.e. the variables not in $N(i)$), so those are what we’re interested in. Everything within $N(i)$ we can average over. Examples:

- Python shell: whatever the exact ALU voltages, only the “numbers” they encode matter to the pixels on the screen far away.
- Star: whatever the exact mass distribution, only the total mass and center of mass matter to the sun’s trajectory far away.
- Roadtrip: whatever the exact route through the Bay Area, only the highway choice and rough timing for the middle leg matter to route-planning within LA.

Formally, our condition is:

$$P[X_{\bar{N}(i)} \mid f(X_i)] = P[X_{\bar{N}(i)} \mid X_i]$$
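
Here is a brute-force check of that condition on a toy model (my own example, not from the post): $X_i$ carries a “signal” bit and a “junk” bit, only the signal bit influences the far-away variable, and the summary $f$ keeps just the signal bit.

```python
from itertools import product
from collections import defaultdict

# Brute-force check of P[X_far | f(X_i)] = P[X_far | X_i] on a toy model (my own
# example). X_i = (signal, junk); only the signal bit reaches the far-away variable.
def joint():
    """Enumerate (x_i, x_far, probability) over the whole joint distribution."""
    for signal, junk, noise in product([0, 1], repeat=3):
        p = 0.5 * 0.5 * (0.9 if noise == 0 else 0.1)  # signal, junk uniform; noise ~ Bernoulli(0.1)
        yield (signal, junk), signal ^ noise, p

def summary(x_i):
    signal, junk = x_i
    return signal  # f(X_i): keep the signal bit, throw away the junk bit

def conditional(group_key):
    """P[X_far | group_key(x_i)], computed by enumeration."""
    num, den = defaultdict(float), defaultdict(float)
    for x_i, x_far, p in joint():
        k = group_key(x_i)
        num[(k, x_far)] += p
        den[k] += p
    return {(k, x_far): num[(k, x_far)] / den[k] for (k, x_far) in num}

given_x_i = conditional(lambda x_i: x_i)  # condition on all of X_i
given_f = conditional(summary)            # condition only on f(X_i)

for (x_i, x_far), p in sorted(given_x_i.items()):
    assert abs(p - given_f[(summary(x_i), x_far)]) < 1e-12
    print(f"P[X_far={x_far} | X_i={x_i}] = {p:.2f} = P[X_far={x_far} | f(X_i)={summary(x_i)}]")
```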

We could even go a step further and apply the minimal map theorems to find the $f(X_i)$ containing the least possible information, although it won't necessarily be the most computationally efficient summary.
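
In the toy model above, that minimal summary would just be the conditional distribution over the far-away variable itself, mapping each value of $X_i$ to $P[X_{\bar{N}(i)} \mid X_i]$ - a quick sketch:

```python
from itertools import product

# The minimal summary for the toy model above (my own example): map each value of
# X_i to its conditional distribution over the far-away variable. Values of X_i
# with identical conditionals get identical summaries - nothing extra is kept.
def p_far_given(x_i):
    signal, junk = x_i
    p1 = 0.9 if signal == 1 else 0.1  # P[X_far = 1 | X_i]: far variable is signal XOR Bernoulli(0.1) noise
    return (round(1 - p1, 3), p1)

minimal_summary = {x_i: p_far_given(x_i) for x_i in product([0, 1], repeat=2)}
for x_i, summ in minimal_summary.items():
    print(f"x_i={x_i} -> f(x_i)={summ}")
# (0, 0) and (0, 1) share a summary, as do (1, 0) and (1, 1): the junk bit drops out.
```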

Another notion of “relevance” is causal influence - while probabilistic information is the key criterion for prediction, causal influence is the key for planning. We want to know what impact an intervention on $X_i$ will have on far-away variables. We’re still happy to average over “nearby” variables, but there’s a new subtlety: we may also want to intervene on some of the variables far away from $X_i$. For instance, if we’re planning a road-trip, we want to be able to consider possible route plans within LA - different routes would be different interventions on variables far away from SF. Our high-level model needs to hold for any of these interventions. Our criteria become:

$$P[X_{\bar{N}(i) \setminus Z} \mid \text{do}(X_Z = x_Z), \text{do}(X_i = x_i)] = P[X_{\bar{N}(i) \setminus Z} \mid \text{do}(X_Z = x_Z), \text{do}(f(X_i) = f(x_i))]$$

… for any $Z \subseteq \bar{N}(i)$, and any intervention values $x_Z$ for which the intervened distributions are well-defined. Here $\text{do}(f(X_i) = f(x_i))$ means setting $X_i$ to an arbitrary value $x_i'$ such that $f(x_i') = f(x_i)$ - i.e. “we just need to get to the highway by noon, the details don’t matter, we can work them out later”. This requires that the details do not, in fact, matter - i.e. $P[X_{\bar{N}(i) \setminus Z} \mid \text{do}(X_Z = x_Z), \text{do}(X_i = x_i')]$ has the same value for different $x_i'$ so long as $f(x_i')$ remains the same. That’s what the notation is expressing.
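
A minimal sketch of that criterion, in the same spirit as the earlier toy model (again my own example, not the post’s): structural equations in which only the signal bit of $X_i$ reaches the far-away variable $Y$, so interventions that differ only in the junk bit give the same interventional distribution.

```python
import numpy as np

# Toy structural-equations sketch of the causal criterion (my own example, not
# from the post). X_i = (signal, junk) is the variable being abstracted; Z and Y
# are far away. We intervene on both Z and X_i, and check that the distribution
# of Y depends on the intervened x_i only through f(x_i) = signal.
rng = np.random.default_rng(0)

def p_y_under(do_x_i, do_z, n=100_000):
    """Empirical P[Y = 1] under do(X_i = x_i) and do(X_Z = x_Z)."""
    signal, junk = do_x_i              # do(X_i = x_i): override X_i's structural equation
    noise = rng.random(n) < 0.1
    y = signal ^ do_z ^ noise          # structural equation for the far-away Y; junk never enters
    return y.mean()

for z in [0, 1]:
    for x_i_a, x_i_b in [((0, 0), (0, 1)), ((1, 0), (1, 1))]:
        pa, pb = p_y_under(x_i_a, z), p_y_under(x_i_b, z)
        print(f"do(Z={z}): P[Y=1 | do(X_i={x_i_a})] ≈ {pa:.3f}, "
              f"P[Y=1 | do(X_i={x_i_b})] ≈ {pb:.3f}  (same signal bit, same distribution)")
```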

Finally, we could combine our criteria: require that any interventions on $X_{\bar{N}(i)}$ be supported, with either information or intervention on $X_i$. The criteria:

$$P[X_{\bar{N}(i) \setminus Z} \mid \text{do}(X_Z = x_Z), X_i] = P[X_{\bar{N}(i) \setminus Z} \mid \text{do}(X_Z = x_Z), f(X_i)]$$

$$P[X_{\bar{N}(i) \setminus Z} \mid \text{do}(X_Z = x_Z), \text{do}(X_i = x_i)] = P[X_{\bar{N}(i) \setminus Z} \mid \text{do}(X_Z = x_Z), \text{do}(f(X_i) = f(x_i))]$$

Both of these must hold for any $Z \subseteq \bar{N}(i)$, and any intervention values $x_Z$ for which the intervened distributions are well-defined. In that case, we can predict the effects of arbitrary interventions on $X_i$ and any of the $X_{\bar{N}(i)}$ on the other far-away variables, using only the summary information present in $f(X_i)$.
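
And a sketch of the first of those two combined criteria in the same toy setup (my own example): intervene on the far-away $Z$, then check that conditioning on all of $X_i$ says no more about the remaining far-away variable than conditioning on $f(X_i)$ alone.

```python
import numpy as np

# Sketch of the first combined criterion in the same toy setup (my own example):
# intervene on the far-away Z, then check that conditioning on all of X_i says no
# more about the remaining far-away Y than conditioning on f(X_i) alone.
rng = np.random.default_rng(1)
n = 500_000

signal = rng.integers(0, 2, n)            # X_i = (signal, junk), left un-intervened here
junk = rng.integers(0, 2, n)
do_z = 1                                  # do(X_Z = 1)
noise = rng.random(n) < 0.1
y = signal ^ do_z ^ noise                 # far-away variable under the intervention

for s in [0, 1]:
    p_given_f = y[signal == s].mean()
    for j in [0, 1]:
        mask = (signal == s) & (junk == j)
        print(f"do(Z=1): P[Y=1 | X_i=({s},{j})] ≈ {y[mask].mean():.3f}  "
              f"vs  P[Y=1 | f(X_i)={s}] ≈ {p_given_f:.3f}")
```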

1 comment


comment by Pongo · 2020-08-03T07:06:26.528Z

The informal section at the beginning is the piece of your writing that clicked the most for me. I really like only caring about the part of the local information that matters for the global information.

I remain confused about how to think about abstraction leaks in this world (like Rowhammer messing with our ability to ignore the details of charges in circuitry).