A simple example of conditional orthogonality in finite factored sets

post by DanielFilan · 2021-07-06T00:36:40.264Z · LW · GW · 3 comments

Recently, MIRI researcher Scott Garrabrant has publicized his work on finite factored sets. It allegedly offers a way to understand agency and causality in a set-up like the causal graphs championed by Judea Pearl. Unfortunately, the definition of conditional orthogonality is very confusing. I'm not aware of any public examples of people demonstrating that they understand it, but I didn't really understand it until an hour ago, and I've heard others say that it went over their heads. So, I'd like to give an example of it here.

In a finite factored set, you have your base set S, and a set B of 'factors' of your set. In my case, the base set S will be four-dimensional space - I'm sorry, I know that's one more dimension than the number that well-adjusted people can visualize, but it really would be a much worse example if I were restricted to three dimensions. We'll think of the points in this space as tuples (x_1, x_2, x_3, x_4) where each x_i is a real number between, say, -2 and 2 [^1]. We'll say that X_1 is the 'factor', aka partition, that groups points together based on what their value of x_1 is, and similarly for X_2, X_3, and X_4, and set B = {X_1, X_2, X_3, X_4}. I leave it as an exercise for the reader to check whether this is in fact a finite factored set. Also, I'll talk about the 'value' of partitions and factors - technically, I suppose you could say that the 'value' of some partition at a point is the set in the partition that contains the point, but I'll use it to mean that, for example, the 'value' of X_2 at the point (0, 0.5, -1, 0.7) is 0.5. If you think of partitions as questions where different points in S give different answers, the 'value' of a partition at a point is the answer to the question.
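If it helps to have something concrete to poke at, here's a rough Python sketch of this setup. To keep things genuinely finite (and fast to brute-force later on), I use a very coarse grid of coordinate values; the grid, the variable names, and the choice to represent a factor by its coordinate index are all just illustrative choices of mine, not part of the formalism.

```python
from itertools import product

GRID = [-1, 0, 1]                  # a (very) coarse stand-in for "real numbers between -2 and 2"
S = list(product(GRID, repeat=4))  # the base set: all tuples (x_1, x_2, x_3, x_4)

# Represent the factor X_{i+1} by the coordinate index i. The 'value' of a factor
# at a point is just that coordinate, which labels the partition cell the point is in.
B = [0, 1, 2, 3]

def value(b, s):
    """The 'value' of factor b at point s."""
    return s[b]
```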

[EDIT: for the rest of the post, you might want to imagine (x_1, x_2, x_3, x_4) as points in space-time, where x_4 represents the time, and x_1, x_2, and x_3 represent spatial coordinates - for example, inside a room, where you're measuring from the north-east corner of the floor. In this analogy, we'll imagine that there's a flat piece of sheet metal leaning on the floor against two walls, over that corner. We'll try conditioning on that - so, looking only at points in space-time that are spatially located on that sheet - and see that distance left is no longer orthogonal to distance up, but that both are still orthogonal to time.]

Now, we'll want to condition on the set E = {(x_1, x_2, x_3, x_4) in S : x_1 + x_2 + x_3 = 1}. The thing with E is that once you know you're in E, x_1 is no longer independent of x_2, like it was before, since they're linked together by the condition that x_1 + x_2 + x_3 = 1. However, x_4 has nothing to do with that condition. So, what's going to happen is that conditioned on being in E, X_1 is orthogonal to X_4 but not to X_2.
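Here's what E and that linkage look like in the toy sketch above (with my coarse grid standing in for the real interval, so take it as illustration rather than the real thing):

```python
# The conditioning set: points on the plane x_1 + x_2 + x_3 = 1.
E = [s for s in S if s[0] + s[1] + s[2] == 1]

# Within E, x_1 is pinned down by x_2 and x_3 - they're linked by the constraint...
assert all(s[0] == 1 - s[1] - s[2] for s in E)
# ...while x_4 is untouched: every grid value of x_4 still shows up somewhere in E.
assert sorted({s[3] for s in E}) == GRID
```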

In order to show this, we'll check the definition of conditional orthogonality, which actually refers to this thing called conditional history. I'll write out the definition of conditional history formally, and then try to explain it informally: the conditional history of X given E, which we'll write as h(X|E), is the smallest set of factors satisfying the following two conditions:

  1. For all s and t in E, if s and t have the same value under every factor b in h(X|E), then they have the same value under X.
  2. For all s and t in E and every r in S, if r has the same value as s under every factor b in h(X|E), and the same value as t under every factor b in B that isn't in h(X|E), then r is in E.

Condition 1 means that, if you think of the partitions as carving up the set S, then the partition X doesn't carve E up more finely than if you carved according to everything in h(X|E). Another way to say that is that if you know you're in E, knowing the values of everything in the conditional history of X tells you what the 'value' of X is, which hopefully makes sense.

Condition 2 says that if you want to know if a point is in E, you can separately consider the 'values' of the partitions in the conditional history, as well as the other partitions that are in B but not in the conditional history. So it's saying that there's no 'entanglement' between the partitions in and out of the conditional history regarding membership in E. This is still probably confusing, but it will make more sense with examples.
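To make the two conditions concrete, here's a brute-force sketch in the same toy Python setup as above. It's certainly not an efficient way to compute conditional histories, but as far as I can tell it transcribes conditions 1 and 2 directly: check every subset of B from smallest to largest, and return the first one satisfying both.

```python
from itertools import combinations

def agrees(s, t, factors):
    """s and t have the same value under every factor in `factors`."""
    return all(value(b, s) == value(b, t) for b in factors)

def is_history(H, X, E):
    rest = [b for b in B if b not in H]
    E_set = set(E)
    # Condition 1: within E, agreeing on every factor in H forces agreement on X.
    cond1 = all(value(X, s) == value(X, t)
                for s in E for t in E if agrees(s, t, H))
    # Condition 2: matching some point of E on H, and some (possibly different)
    # point of E on B \ H, forces membership in E.
    cond2 = all(r in E_set
                for r in S for s in E for t in E
                if agrees(r, s, H) and agrees(r, t, rest))
    return cond1 and cond2

def conditional_history(X, E):
    """The smallest subset of B satisfying conditions 1 and 2 (scanned by size)."""
    for size in range(len(B) + 1):
        for H in combinations(B, size):
            if is_history(set(H), X, E):
                return set(H)
```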

Now, what's conditional orthogonality? That's pretty simple once you get conditional histories: X and Y are conditionally orthogonal given E if the conditional history of X given E doesn't intersect the conditional history of Y given E. So it's saying that once you're in E, the things determining X are different from the things determining Y, in the finite factored sets way of looking at things.
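In the same sketch, that's then just a disjointness check on the brute-forced histories:

```python
def conditionally_orthogonal(X, Y, E):
    # X and Y are orthogonal given E iff their conditional histories given E are disjoint.
    return not (conditional_history(X, E) & conditional_history(Y, E))
```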

Let's look at some conditional histories in our concrete example: what's the history of X_1 given E? Well, it's got to contain X_1, because otherwise that would violate condition 1: you can't know the value of x_1 without being told the value of x_1, even once you know you're in E. But that can't be the whole thing. Consider the point r = (0, 0, 0, 0). If you just knew the value of x_1 at r, that would be compatible with r actually being (0, 0, 1, 0), which is in E. And if you just knew the values of x_2, x_3, and x_4, you could imagine that r was actually equal to (1, 0, 0, 0), which is also in E. So, if you considered the factors in {X_1} separately to the other factors, you'd conclude that r could be in E - but it's actually not! This is exactly the thing that condition 2 is telling us can't happen. In fact, the conditional history of X_1 given E is {X_1, X_2, X_3}, which I'll leave for you to check. I'll also let you check that the conditional history of X_2 given E is also {X_1, X_2, X_3}.
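You can check this failure of 'separate consideration' directly in the toy sketch, with the r given above and my discretized E:

```python
r = (0, 0, 0, 0)
assert any(value(0, s) == value(0, r) for s in E)   # r's x_1-value matches some point of E
assert any(s[1:] == r[1:] for s in E)               # its (x_2, x_3, x_4)-values match another
assert r not in E                                   # and yet r itself is not in E
```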

Now, what's the conditional history of X_4 given E? It has to include X_4, because if someone doesn't tell you x_4, you can't figure it out. In fact, it's exactly {X_4}. Let's check condition 2: it says that if all the factors outside the conditional history are compatible with some point being in E, and all the factors inside the conditional history are compatible with some point being in E, then it must be in E. That checks out here: you need to know the values of all three of x_1, x_2, and x_3 at once to know if something's in E, but you get those together if you jointly consider those factors outside your conditional history, which is {X_1, X_2, X_3}. So looking at (0, 0, 0, 0), if you only look at the values that aren't told to you by the conditional history, which is to say the first three numbers, you can tell it's not in E and aren't tricked. And if you look at (0, 0, 1, 1), you look at the factors in the conditional history (namely X_4), and it checks out, you look at the factors outside it and that also checks out, and the point is really in E.

Hopefully this gives you some insight into condition 2 of the definition of conditional history. It's saying that when we divide factors up to get a history, we can't put factors that are entangled by the set we're conditioning on onto 'different sides' - all the entangled factors have to be in the history, or they all have to be out of the history.

In summary: h(X_1|E) = h(X_2|E) = {X_1, X_2, X_3}, and h(X_4|E) = {X_4}. So, is X_1 orthogonal to X_2 given E? No, their conditional histories overlap - in fact, they're identical! Is X_1 orthogonal to X_4 given E? Yes, they have disjoint conditional histories.
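And, for what it's worth, running the brute-force sketch from above on the discretized example reproduces exactly this summary:

```python
X1, X2, X3, X4 = B

print(conditional_history(X1, E))            # {0, 1, 2}, i.e. {X_1, X_2, X_3}
print(conditional_history(X2, E))            # {0, 1, 2}, i.e. {X_1, X_2, X_3}
print(conditional_history(X4, E))            # {3},       i.e. {X_4}
print(conditionally_orthogonal(X1, X2, E))   # False
print(conditionally_orthogonal(X1, X4, E))   # True
```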

Some notes:

[^1] I know what you're saying - "That's not a finite set! Finite factored sets have to be finite!" Well, if you insist, you can think of them as only the numbers between -2 and 2 with two decimal places. That makes the set finite and doesn't really change anything. (Which suggests that a more expansive concept could be used instead of finite factored sets.)

3 comments

comment by Scott Garrabrant · 2021-07-06T18:13:59.332Z · LW(p) · GW(p)

Thanks for writing this.

On the finiteness point, I conjecture that "finite dimensional" (|B| is finite) is sufficient for all of my results so far, although some of my proofs actually use "finite" (|S| is finite). The example with real numbers is still finite dimensional, so I don't expect any problems. 

comment by DanielFilan · 2021-07-06T19:38:37.511Z · LW(p) · GW(p)

Seems right. I still think it's funky that X_1 and X_2 are conditionally non-orthogonal even when the range of the variables is unbounded.

comment by Scott Garrabrant · 2021-07-06T23:45:22.571Z · LW(p) · GW(p)

Yeah, this is the point that orthogonality is a stronger notion than just all values being mutually compatible. Any x1 and x2 values are mutually compatible, but we don't call them orthogonal. This is similar to how we don't want to say that x1 and (the level sets of) x1+x2 are independent.

The coordinate system has a collection of surgeries: you can take a point and change the x1 value without changing the other values. When you condition on E, that surgery is no longer well defined. However, the surgery of only changing the x4 value is still well defined, and the surgery of changing x1, x2, and x3 simultaneously is still well defined (provided you change them to something compatible with E).

We could define a surgery that says that when you increase x1, you decrease x2 by the same amount, but that is a new surgery that we invented, not one that comes from the original coordinate system.