Knowledge is not just mutual information

post by Alex Flint (alexflint) · 2021-06-10T01:01:32.300Z · LW · GW · 6 comments

Contents

  Example: Computer finding an object
  Counterexample: Computer case
  Counterexample: Perfect self-knowledge
  General problem: information is necessary but not sufficient for knowledge
  Conclusion

Financial status: This is independent research. I welcome financial support to make further posts like this possible.

Epistemic status: This is in-progress thinking.


This post is part of a sequence on the accumulation of knowledge. Our goal is to articulate what it means for knowledge to accumulate within a physical system.

The challenge is this: given a closed physical system, if I point to a region and tell you that knowledge is accumulating in this region, how would you test my claim? What are the physical characteristics of the accumulation of knowledge? We do not take some agent as the fundamental starting point but instead take a mechanistic physical system as the starting point, and look for a definition of knowledge in terms of physical patterns.

The previous post looked at measuring the resemblance between some region and its environment as a possible definition of knowledge and found that it was not able to account for the range of possible representations of knowledge. This post will explore mutual information between a region within a system and the remainder of the system as a definition of the accumulation of knowledge.

Formally, the mutual information between two objects is the gap between the entropy of the two objects considered as a whole, and the sum of the entropy of the two objects considered separately. If knowing the configuration of one object tells us nothing about the configuration of the other object, then the entropy of the whole will be exactly equal to the sum of the entropy of the parts, meaning there is no gap, in which case the mutual information between the two objects is zero. To the extent that knowing the configuration of one object tells us something about the configuration of the other, the mutual information between them is greater than zero. Specifically, if we would have had to ask some number N of yes-or-no questions to identify the configuration of the environment without any knowledge of the configuration of the region of interest, and if knowing the configuration of the region of interest reduces the number of yes-or-no questions we need to ask by M, then we say that there are "M bits" of mutual information between the region of interest and its environment.
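
In symbols, writing R for the configuration of the region of interest and E for the configuration of the environment (notation introduced here purely for convenience), this is the standard identity

I(R; E) = H(R) + H(E) - H(R, E) = H(E) - H(E | R),

so the N yes-or-no questions above correspond to H(E), and the M-bit saving corresponds to H(E) - H(E | R).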

Mutual information is usually defined in terms of two variables whose exact values are unknown but over which we have probability distributions. In this post, the two variables are the physical configuration of the region of interest and the physical configuration of the environment. Looking at things in terms of physical objects is important because I want to be able to examine, say, a physical region within a shipping container or a subregion of a cellular automaton and discern the accumulation of knowledge without having any a priori understanding of where the "agents" or "computers" or "beliefs" are within the system. The only structure I'm willing to take for granted is the physical state of the system and some region of interest that we are investigating as a possible site of knowledge accumulation.

It is not possible to look at a single snapshot of two objects and compute the mutual information between them. Mutual information is defined with respect to probability distributions over configurations, not with respect to individual configurations. What we really want to do is to run many simulations of our system, build up a joint probability distribution over how our region of interest is configured together with how the environment is configured, and compute the mutual information under that joint distribution.
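
As a concrete sketch of that procedure (not something from the post; the function name and the assumption that each configuration can be reduced to a hashable value are choices made for illustration only), a minimal plug-in estimator of mutual information from paired samples might look like this:

```python
import math
from collections import Counter

def mutual_information(samples):
    """Plug-in estimate of I(R; E) in bits from paired samples.

    `samples` is a list of (region_config, env_config) pairs, one pair per
    simulation run, where each configuration has already been reduced to
    some hashable value (an assumption made for this sketch).
    """
    n = len(samples)
    joint = Counter(samples)                 # empirical p(r, e)
    region = Counter(r for r, _ in samples)  # empirical p(r)
    env = Counter(e for _, e in samples)     # empirical p(e)

    mi = 0.0
    for (r, e), count in joint.items():
        p_re = count / n
        p_r = region[r] / n
        p_e = env[e] / n
        mi += p_re * math.log2(p_re / (p_r * p_e))
    return mi
```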

Example: Computer finding an object

Suppose there is a computer with a camera in the shipping container that is programmed to scan the shipping container and find a certain object, then record its location within its memory. We could set up the shipping container many times with the object in different locations, and allow the computer to find the object each time. After however long it takes for the computer to complete its scan of the shipping container and store the location of the object in memory, the mutual information between the computer and its environment will have increased. We will be able to measure this increase in mutual information no matter how the computer represents the position of the object. We could in principle compute mutual information using just the physical configuration of the computer, without knowing that it is a computer, since the representation of the position of the object in memory grounds out as the physical configuration of certain memory cells. It would take a lot of trial runs to build up enough samples to do this, but it could in principle be done.
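
For intuition, a toy version of those trial runs might look like the following, reusing the mutual_information sketch above. The eight candidate positions, the three-bit memory encoding, and the absence of sensor noise are all assumptions invented for this illustration.

```python
import random

def run_trial():
    """One toy trial: hide the object, then let the 'computer' record it in memory."""
    position = random.randrange(8)  # environment: the object sits in one of 8 spots
    memory_cells = tuple((position >> i) & 1 for i in range(3))  # region of interest: 3 memory cells
    return memory_cells, position

samples = [run_trial() for _ in range(10_000)]
print(mutual_information(samples))  # close to 3 bits: the memory pins down the position
```

Because the memory state determines the position exactly and the position is roughly uniform over eight values, the estimate comes out near 3 bits, and it would do so however the position happened to be represented in memory.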

Counterexample: Computer case

But now consider: the same photons that are incident upon the camera that the computer is using to find the object are also incident upon every other object that has visual line-of-sight to the object being sought. At the microscopic level, each photon that strikes the surface of an object might change the physical configuration of that object by exciting an electron or breaking a covalent bond. Over time, the photons bouncing off the object being sought and striking other objects will leave an imprint in every one of those objects that will have high mutual information with the position of the object being sought. So then does the physical case in which the computer is housed have as much "knowledge" about the position of the object being sought as the computer itself?

It seems that mutual information does not take into account whether the information being accumulated is useful and accessible.

Counterexample: Perfect self-knowledge

In the setup above, the "environment" was the interior of the shipping container minus the region of interest. But we are also interested in entities that accumulate knowledge about themselves. For example, a computer that is using an electron microscope to build up a circuit diagram of its own CPU ought to be considered an example of the accumulation of knowledge. However, the mutual information between the computer and itself is always equal to the entropy of the computer, regardless of whether any self-examination is taking place, since any variable always has perfect mutual information with itself. The same goes for the mutual information between the region of interest and the whole system: since the whole system includes the region of interest, the mutual information between the two is always equal to the entropy of the region of interest, because every bit of information we learn about the region of interest gives us exactly one bit of information about the whole system as well.
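
Both of these claims follow directly from the entropy form of mutual information. Writing R for the region of interest and S = (R, E) for the whole system (again, notation introduced here for convenience):

I(R; R) = H(R) + H(R) - H(R, R) = H(R)

I(R; S) = H(R) + H(S) - H(R, S) = H(R), since H(R, S) = H(S) whenever S already contains R.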

It seems again that measuring mutual information does not take into account whether the information being accumulated is useful and accessible. What we are interested in is knowledge that allows an entity to exert goal-directed influence over the future, and a rock, despite being "a perfect map of itself" in this sense, does not exert goal-directed influence over the future.

General problem: information is necessary but not sufficient for knowledge

The accumulation of information within a region of interest seems to be a necessary but not sufficient condition for the accumulation of knowledge within that region. Measuring mutual information fails to account for the usefulness and accessibility that makes information into knowledge.

Conclusion

The accumulation of knowledge clearly does have a lot to do with mutual information, but it cannot be accounted for just as mutual information between the physical configuration of two parts of the system. The next post will explore digital abstraction layers, in which we group low-level configurations together and compute mutual information between high- and low-level configurations of the system.

6 comments


comment by abramdemski · 2021-11-10T16:42:11.484Z · LW(p) · GW(p)

Recently I have been thinking that we should in fact use "really basic" definitions, EG "knowledge is just mutual information", and also other things with a general theme of "don't make agency so complicated".  The hope is to eventually be able to build up to complicated types of knowledge (such as the definition you seek here), but starting with really basic forms. Let me see if I can explain.

First, an ontology is just an agent's way of organizing information about the world. These can take lots of forms, and I'm not going to constrain them to any particular formalization (but doing so could be part of the research program I'm suggesting).

Second, a third-person perspective is a "view from nowhere" which has the capacity to be rooted at specific locations, recovering first-person perspectives. In other words, I relate subjective and objective in the following way: objectivity is just a mapping from specific "locations" (within the objective perspective) to subjective views "from that location".

Note that an objective view is not supposed to be necessarily correct; it is just a hypothesis about what the universe looks like, described from the 3rd person perspective.

Notice what I'm doing: I'm defining a 3rd person perspective as a solution to the mind-body problem. Why?

Well, what's a 3rd-person perspective good for? Why do we invent such things in the first place?

It's good for communication. I speak of the world in objective terms largely because this is one of the best ways to communicate with others. Rather than having a different word for the front of a car, the side of a car, etc (all the ways I can experience a car), I have "car", so that I can refer to a car in an experience-agnostic way. Other people understand this language and can translate it to their own experience appropriately.

Similarly, if I say something is "to my left" rather than "left", other people know how to translate that to their own personal coordinate system.

So far so good.

Now, a reasonable project would be to create as useful a 3rd person perspective as possible. One thing this means is that it should help translate between as many perspectives as possible.

I don't claim to have a systematic grasp of what that implies, but one obvious thing people do is qualify statements: eg, "I believe X" rather than just stating "X" outright. "I want X" rather than "X should happen". This communicates information that a broad variety of listeners can accept.

Now, a controversial step. A notion of objectivity needs to decide what counts as a "conscious experiencer" or "potential viewpoint". That's because the whole point of a notion of objectivity is to be an ontology which can be mapped into a set of 1st-person viewpoints.

So I propose that we make this as broad as possible. In particular, we should be able to consider the viewpoint of any physical object. (At least.)

This is little baby panpsychism. I'm not claiming that "all physical objects have conscious experiences" in any meaningful sense, but I do want my notion of conscious experience to extend to all physical objects, just because that's a pretty big boundary I can draw, so that I'm sure I'm not excluding anyone actually important with my definition.

For an object to "experience" an event is for it to, like, "feel some shockwaves" from the event -- ie, for there to be mutual information there. On the other hand, for an object to "directly experience" an event could be defined as being contained within the physical space of the event, or perhaps to intersect that physical space, or something along those lines.

For an object to "know about something" in this broad sense is just for there to be mutual information.

For me to think there is knowledge is for my objective model to say that there is mutual information.

These definitions obviously have some problems. 

Let's look at a different type of knowledge, which I will call tacit knowledge -- stuff like being able to ride a bike (aka "know-how"). I think this can be defined (following my "very basic" theme) from an object's ability to participate successfully in patterns. A screw "knows how" to fit in threaded holes of the correct size. It "knows how" to go further in when rotated in one way, and come further out when rotated the other way. Etc.

Now, an object has some kind of learning if it can increase its tacit knowledge (in some sense) through experience. Perhaps we could say something like, it learns for a specified goal predicate if it has a tendency to increase the measure of situations in which it satisfies this goal predicate, through experience? (Mathematically this is a bit vague, sorry.)

Now we can start to think about measuring the extent to which mutual information contributes to learning of tacit knowledge. Something happens to our object. It gains some mutual information w/ external stuff. If this mutual information increases its ability to pursue some goal predicate, we can say that the information is accessible wrt that goal predicate. We can imagine the goal predicate being "active" in the agent, and having a "translation system" whereby it unpacks the mutual information into what it needs.

On the other hand, if I undergo an experience while I'm sleeping, and the mutual information I have with that event is just some small rearrangements of cellular structure which I never notice, then the mutual information is not accessible to any significant goal predicates which my learning tracks.

I don't think this solves all the problems you want to solve, but it seems to me like a fruitful way of trying to come up with definitions -- start with really basic forms of "knowledge" and related things, and try to stack them up to get to the more complex notions.

Replies from: alexflint, adamShimi
comment by Alex Flint (alexflint) · 2021-11-10T20:02:36.199Z · LW(p) · GW(p)

First, an ontology is just an agent's way of organizing information about the world...

Second, a third-person perspective is a "view from nowhere" which has the capacity to be rooted at specific locations...

Yep, I'm with you here.

Well, what's a 3rd-person perspective good for? Why do we invent such things in the first place? It's good for communication.

Yeah I very much agree with justifying the use of 3rd person perspectives on practical grounds.

we should be able to consider the [first person] viewpoint of any physical object.

Well, if we are choosing to work with third-person perspectives, then maybe we don't need first-person perspectives at all. We can describe gravity and entropy without any first-person perspectives, for example.

I'm not against first person perspectives, but if we're working with third person perspectives then we might start by sticking to third person perspectives exclusively.

Let's look at a different type of knowledge, which I will call tacit knowledge -- stuff like being able to ride a bike (aka "know-how"). I think this can be defined (following my "very basic" theme) from an object's ability to participate successfully in patterns.

Yeah right. A screw that fits into a hole does have mutual information with the hole. I like the idea that knowledge is about the capacity to harmonize within a particular environment because it might avoid the need to define goal-directedness.

Now we can start to think about measuring the extent to which mutual information contributes to learning of tacit knowledge. Something happens to our object. It gains some mutual information w/ external stuff. If this mutual information increases its ability to pursue some goal predicate, we can say that the information is accessible wrt that goal predicate. We can imagine the goal predicate being "active" in the agent, and having a "translation system" whereby it unpacks the mutual information into what it needs.

The only problem is that now we have to say what a goal predicate is. Do you have a sense of how to do that? I have also come to the conclusion that knowledge has a lot to do with being useful in service of a goal, and that then requires some way to talk about goals and usefulness.

The hope is to eventually be able to build up to complicated types of knowledge (such as the definition you seek here), but starting with really basic forms.

I very much resonate with keeping it as simple as possible, especially when doing this kind of conceptual engineering, which can so easily lose its way. I have been grounding my thinking in wanting to know whether or not a certain entity in the world has an understanding of a certain phenomenon, in order to use that to overcome the deceptive misalignment problem. Do you also have go-to practical problems against which to test these kinds of definitions?

comment by adamShimi · 2021-11-10T22:28:16.829Z · LW(p) · GW(p)

So, I'm trying to interpret your proposal from an epistemic strategy perspective: asking how you are trying to produce knowledge.

It sounds to me like you're proposing to start with a very general formalization using simple mathematical objects (like objectivity being a sort of function, and participating in a goal increasing the measure on the states satisfying the predicate). Then, when you reach situations where the definitions are not constraining enough, like what Alex describes, you add further constraints on these objects?

I have trouble understanding how different it is from the "standard way" Alex is using of proposing a simple definition, finding where it breaks, and then trying to refine it and break it again. Rinse and repeat. Could you help me with what you feel are the main differences?

comment by adamShimi · 2021-06-12T12:42:40.212Z · LW(p) · GW(p)

Thanks again for a nice post in this sequence!

The previous post looked at measuring the resemblance between some region and its environment as a possible definition of knowledge and found that it was not able to account for the range of possible representations of knowledge.

I found myself going back to the previous post to clarify what you mean here. I feel like you could do a better job of summarizing the issue of the previous post (maybe by mentioning the computer example explicitly?).

Formally, the mutual information between two objects is the gap between the entropy of the two objects considered as a whole, and the sum of the entropy of the two objects considered separately. If knowing the configuration of one object tells us nothing about the configuration of the other object, then the entropy of the whole will be exactly equal to the sum of the entropy of the parts, meaning there is no gap, in which case the mutual information between the two objects is zero. To the extent that knowing the configuration of one object tells us something about the configuration of the other, the mutual information between them is greater than zero.

I need to get deeper into information theory, but that is probably the most intuitive explanation of mutual information I've seen. I delayed reading this post because I worried that my half-remembered information theory wasn't up to it, but you deal with that nicely.

At the microscopic level, each photon that strikes the surface of an object might change the physical configuration of that object by exciting an electron or knocking out a covalent bond. Over time, the photons bouncing off the object being sought and striking other objects will leave an imprint in every one of those objects that will have high mutual information with the position of the object being sought. So then does the physical case in which the computer is housed have as much "knowledge" about the position of the object being sought as the computer itself?

Interestingly, I expect this effect to disappear when the measurements defining our two variables get less precise. In a sense, the mutual information between the case and the shipping container depends on measuring very subtle differences, whereas the mutual information between the computer and the shipping container is far more robust to loss of precision.

For example, a computer that is using an electron microscope to build up a circuit diagram of its own CPU ought to be considered an example of the accumulation of knowledge. However, the mutual information between the computer and itself is always equal to the entropy of the computer and is therefore constant over time, since any variable always has perfect mutual information with itself.

But wouldn't there be a part of the computer that accumulates knowledge about the whole computer?

This is also true of the mutual information between the region of interest and the whole system: since the whole system includes the region of interest, the mutual information between the two is always equal to the entropy of the region of interest, since every bit of information we learn about the region of interest gives us exactly one bit of information about the whole system also.

Maybe it's my lack of understanding of information theory speaking, but that sounds wrong. Surely there's a difference between cases where the region of interest determines the full environment, and when it is completely independent of the rest of the environment?

The accumulation of information within a region of interest seems to be a necessary but not sufficient condition for the accumulation of knowledge within that region. Measuring mutual information fails to account for the usefulness and accessibility that makes information into knowledge.

Despite my comments above, that sounds broadly correct. I'm not sure that mutual information would capture your textbook example, for instance, even when the textbook contains a lot of knowledge.

comment by DanielFilan · 2021-06-10T18:26:36.929Z · LW(p) · GW(p)

Seems like maybe the solution should be that you only take 'the system' to be the 'controllable' physical variables, or those variables that are relevant for 'consequential' behaviour? Hopefully if one can provide good definitions for these, it will provide a foundation for saying what the abstractions should be that let us distinguish between 'high-level' and 'low-level' behaviour.

comment by Vikrant Varma (amrav) · 2022-05-10T16:36:36.414Z · LW(p) · GW(p)

Thanks for this sequence!

I don't understand why the computer case is a counterexample for mutual information. Doesn't it depend on your priors (which don't know anything about the other background noise interacting with photons)?

Taking the example of a one-time pad, given two random bit strings A and B, if C = A ⊕ B, learning C doesn't tell you anything about A unless you already have some information about B. So I(C; A) = 0 when B is uniform and independent of A.
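
As a quick empirical illustration of this point (a sketch added here, not part of the original comment; the trial count and variable names are arbitrary): with B uniform and independent of A, conditioning on C leaves the distribution of A unchanged, which is exactly the statement that I(C; A) = 0.

```python
import random

trials = 100_000
count_c = {0: 0, 1: 0}           # how often each ciphertext value occurs
count_a1_given_c = {0: 0, 1: 0}  # how often A = 1 alongside that ciphertext value

for _ in range(trials):
    a = random.getrandbits(1)    # plaintext bit A
    b = random.getrandbits(1)    # key bit B: uniform, independent of A
    c = a ^ b                    # one-time-pad ciphertext bit C
    count_c[c] += 1
    count_a1_given_c[c] += a

for c in (0, 1):
    print(c, count_a1_given_c[c] / count_c[c])  # both close to 0.5: C reveals nothing about A
```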

Over time, the photons bouncing off the object being sought and striking other objects will leave an imprint in every one of those objects that will have high mutual information with the position of the object being sought.

If our prior was very certain about any factors that could interact with photons, then indeed the resulting imprints would have high mutual information, but it seems like you can rescue mutual information here by saying that our prior is uncertain about these other factors so the resulting imprints are noisy as well.

On the other hand, it seems correct that an entity that did have a more certain prior over interacting factors would see photon imprints as accumulating knowledge (for example photographic film).