Traversing a Cognition Space
post by Rafael Harth (sil-ver) · 2020-12-07T18:32:21.070Z · LW · GW · 5 comments
(This post is part of a sequence that's meant to be read in order; see the preface [LW · GW].)
Post #1 [LW · GW] was about developing and justifying a formalism for Factored Cognition. Now that we have this formalism, this post is about doing as much with it as possible.
1. Debate Trees
Recall [LW · GW] that a Cognition Space is a pair $(\mathbb{S}, d)$, where $\mathbb{S}$ is a set of statements and $d$ is a difficulty function, both relative to a human $H$.
So far, I've only shown examples of single transcripts. A single transcript corresponds to one path through the Cognition Space that depends on choices from both agents: at every step, the first agent outputs an explanation (which is a sequence of statements), the second agent points at one element of this sequence, and the first agent continues the path from that element onward. However, given that we model Ideal Debate agents as maximally powerful, it is also coherent to ask about the object that results if we fix all of the first agent's actions in advance, such that she 'pre-commits' for any possible combination of choices from the second agent.
I call such an object a Debate Tree, and we can define it formally as a directed rooted tree[1] $T = (V, E)$, where $V$ is a set of finite sequences of statements from $\mathbb{S}$ (so each node is a sequence of statements) and $E \subseteq V \times V$, that satisfies the following three conditions:
- The unique root of $T$ is a one-element 'sequence' $(s^*)$.
- For every edge $(v, w) \in E$, there is a statement $s \in v$[2] such that $w$ is an explanation of $s$.
- If $(v, w) \in E$ and $(v, w') \in E$ explain the same statement $s \in v$, then $w = w'$.
The first condition says that the root of the tree needs to be a single statement (this should be the answer to the input question). The second condition says that every edge needs to explain a statement in its start node; we don't have redundant edges in our tree. And the third condition says that each statement is only explained once; the formal way of saying this is that, if two explanations exist for the same statement, they're really the same. (Note that this restriction exists because the first agent has to choose one explanation during the game; it certainly doesn't imply that only one explanation exists.) There is no condition to demand that each statement needs to have an explanation – whenever there is none, it means that the first agent decides to end the debate when that statement is pointed at.[3]
In this definition, nodes are explanations, and each node has one outgoing edge for each [statement of the explanation that the first agent wants to explain further], which links to an explanation for that statement. Defining the tree over individual statements would lead to a functionally equivalent definition, but I think defining it as-is makes arguments simpler.
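To make the definition concrete, here is a minimal Python sketch of a Debate Tree (the class name, the helper method, and the toy statements are my own illustration, not part of the formalism): each node is a tuple of statements, and each statement of a node has at most one child explanation, mirroring the three conditions above.

```python
from dataclasses import dataclass, field

Statement = str  # in this toy model, statements are just opaque labels


@dataclass
class DebateTree:
    """A node is an explanation: a tuple of statements. Each statement of the
    node has at most one child explanation (condition 3); statements without
    a child are where the first agent is willing to end the debate."""
    explanation: tuple
    children: dict = field(default_factory=dict)

    def __post_init__(self):
        # Condition 2: every edge must explain a statement of this node.
        assert all(s in self.explanation for s in self.children)

    def paths(self):
        """Yield every path the second agent can force, as (visited nodes, final statement)."""
        for s in self.explanation:
            if s in self.children:
                for nodes, final in self.children[s].paths():
                    yield (self.explanation,) + nodes, final
            else:
                yield (self.explanation,), s


# A toy tree whose root is a one-element 'sequence' (condition 1).
tree = DebateTree(
    ("237 is not prime",),
    {"237 is not prime": DebateTree(
        ("237 = 3 * 79", "a number with a nontrivial factorization is not prime"),
        {"237 = 3 * 79": DebateTree(("79 * 3 = 237",))},
    )},
)
print(list(tree.paths()))
```

The `paths` method already anticipates the next definition: a path ends exactly at a statement that the tree does not explain further.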
Recall that a Debate Tree encodes all decisions that the first agent makes (for any possible combination of choices from the second agent). This means that, once we fix this object, the second agent is free to choose any path she wants out of the tree, without further involvement from the first agent. Formally, a path through a Debate Tree $T$ with root $(s^*)$ is a pair[4]
$$P = \big((v_0, \ldots, v_m),\ s\big),$$
where $(v_0, \ldots, v_m)$ is a path through $T$ in the graph-theoretic sense with $v_0 = (s^*)$, and $s \in v_m$ is a statement of the final node that $T$ does not explain further. (And $s$ is the statement the judge has to verify.) (You can go back to post 1 to convince yourself that a path through a Debate tree is also a path through a Cognition Space.)
As mentioned in the first post, we define the difficulty of a path as the difficulty of the final statement (since that is the one being judged). In symbols, $d(P) := d(s)$ for $P = \big((v_0, \ldots, v_m),\ s\big)$.
2. The Ideal Debate-FCH
So far, we haven't talked about how precisely the two agents make their decisions. Given the concepts of Debate Trees and paths, this is now easy. The second agent wants the first agent to lose, which means she'll choose the most difficult path in whatever Debate Tree the first agent chooses. (We still assume that the first agent outputs only true statements, which means that the second agent can't win if the judge successfully verifies the final statement.) The first agent, knowing this, chooses the Debate tree such that the difficulty of the hardest path is minimized.
With this, we are almost ready to define a FCH for Ideal Debate. But first, we need a bit more notation:
- Given a question $q$, we (for now) assume there is a unique statement $s_q \in \mathbb{S}$ that correctly answers the question.
- Given a cognition space $(\mathbb{S}, d)$, we write $\mathbb{T}_s$ for the set of all Debate Trees whose root is $(s)$, i.e., that begin with statement $s$.
- Given a Debate Tree $T$, we write $\mathrm{paths}(T)$ for the set of all paths in $T$.
Given this, the difficulty of the path we will end up with is
$$\min_{T \in \mathbb{T}_{s_q}}\ \max_{P \in \mathrm{paths}(T)} d(P).$$
Since the absolute values of the difficulty function are arbitrary (it doesn't matter whether we denote difficulties on a scale from 0 to 100 or on any other scale), we can assume without loss of generality that the hardest difficulty a human can handle is 1.[5] Thus, we can formulate:
Ideal Debate-FCH. For every question $q \in Q$, we have
$$\min_{T \in \mathbb{T}_{s_q}}\ \max_{P \in \mathrm{paths}(T)} d(P) \le 1,$$
where $H$ is a human (whose difficulty function $d$ is used) and $Q$ a set of questions.
Going forward, it will also be useful to talk about the difficulty of Debate Trees. Thus, we define
$$d(T) := \max_{P \in \mathrm{paths}(T)} d(P),$$
and we say that a Debate Tree $T \in \mathbb{T}_{s_q}$ can handle question $q$ if $d(T) \le 1$ (and, analogously, that $T \in \mathbb{T}_s$ can handle statement $s$).[6]
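To make the min-max concrete, here is a small Python sketch of my own (all names, difficulties, and the explanation structure are made up, and the Cognition Space is assumed to be finite and explicitly enumerated). It computes, for every statement, the best hardest-path difficulty the first agent can guarantee, by iterating the recursion 'either stop at $s$, or pick an explanation of $s$ and accept its worst element' to a fixed point:

```python
def guaranteed_difficulty(statements, d, explanations):
    """For each statement s, compute the lowest hardest-path difficulty the
    first agent can achieve with a finite Debate Tree rooted at (s), i.e.
    the minimum over trees T of d(T).  Iterates
        f(s) <- min( d(s), min over explanations E of s of max_{s' in E} f(s') )
    until nothing changes; after n iterations, f(s) is the best value
    achievable with trees of depth at most n.
    `explanations[s]` is a list of non-empty tuples that count as explanations of s."""
    f = {s: d(s) for s in statements}
    while True:
        new_f = {
            s: min([d(s)] + [max(f[t] for t in e) for e in explanations.get(s, [])])
            for s in statements
        }
        if new_f == f:
            return f
        f = new_f


# Toy Cognition Space: a hard statement A, explained via B, which is in turn
# explained via the two easy statements C and D.
S = ["A", "B", "C", "D"]
d = {"A": 5.0, "B": 3.0, "C": 0.5, "D": 0.7}.__getitem__
expl = {"A": [("B",)], "B": [("C", "D")]}
print(guaranteed_difficulty(S, d, expl))  # {'A': 0.7, 'B': 0.7, 'C': 0.5, 'D': 0.7}
```

A statement can be handled exactly when this value is at most 1; the depth-limited iterates correspond to the depth-restricted trees used in the proof below.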
Here is a different view of the problem. In the space of all statements, there is a subset that the human judge can verify directly, i.e.,
$$\mathbb{S}_0 := \{ s \in \mathbb{S} \mid d(s) \le 1 \}.$$
Then, there is a larger subset that contains all of the above plus the statements that can be explained solely in terms of statements in $\mathbb{S}_0$, i.e.,
$$\mathbb{S}_1 := \mathbb{S}_0 \cup \{ s \in \mathbb{S} \mid \text{there is an explanation of } s \text{ consisting only of statements in } \mathbb{S}_0 \}.$$
In general, given any $\mathbb{S}_n$, we can extend it by adding all statements that can be explained solely in terms of statements in $\mathbb{S}_n$, i.e.,
$$\mathbb{S}_{n+1} := \mathbb{S}_n \cup \{ s \in \mathbb{S} \mid \text{there is an explanation of } s \text{ consisting only of statements in } \mathbb{S}_n \}.$$
Note that this gives us a chain of expanding sets, i.e.,
$$\mathbb{S}_0 \subseteq \mathbb{S}_1 \subseteq \mathbb{S}_2 \subseteq \cdots$$
We can also define the set of all statements that are eventually explainable in this way, i.e.,
$$\mathbb{S}_\infty := \bigcup_{n \in \mathbb{N}} \mathbb{S}_n.$$
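For a finite, explicitly enumerated Cognition Space, this chain can be computed by a straightforward fixed-point iteration. The sketch below (again my own illustration, reusing the toy space from the previous sketch) returns $\mathbb{S}_\infty$:

```python
def s_infinity(statements, d, explanations):
    """Compute S_0, S_1, ... and return their union S_infinity; for a finite
    statement set the chain stabilizes after finitely many steps.
    `explanations[s]` lists the tuples of statements that count as explanations of s."""
    current = {s for s in statements if d(s) <= 1}  # S_0: directly verifiable
    while True:
        grown = current | {
            s for s in statements
            if any(all(t in current for t in e) for e in explanations.get(s, []))
        }  # S_{n+1}: add everything explainable solely in terms of the current set
        if grown == current:
            return current  # fixed point reached: this is S_infinity
        current = grown


S = ["A", "B", "C", "D"]
d = {"A": 5.0, "B": 3.0, "C": 0.5, "D": 0.7}.__getitem__
expl = {"A": [("B",)], "B": [("C", "D")]}
print(s_infinity(S, d, expl))  # {'A', 'B', 'C', 'D'}
```

On this toy space, $\mathbb{S}_\infty$ contains exactly the statements for which the previous sketch reports a guaranteed difficulty of at most 1 – which is what the theorem below states in general.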
Intuitively, it seems like Ideal Debate should be able to handle all questions with answers in $\mathbb{S}_\infty$, since they can be explained in terms of progressively easier statements – and then any path should eventually bottom out at a statement in $\mathbb{S}_0$, which means that the second agent cannot delay the first agent's success indefinitely.
This brings us to our first (and as of now, only) theorem:
Theorem. Given any Cognition Space $(\mathbb{S}, d)$ and any set of questions $Q$ whose correct answers all lie in $\mathbb{S}_\infty$, the Ideal Debate-FCH holds.
Proof. First, note that, while the Ideal Debate-FCH is formulated as a hypothesis, the definition also defines a set, namely
$$D := \{ s \in \mathbb{S} \mid \exists T \in \mathbb{T}_s : d(T) \le 1 \},$$
and the Ideal-Debate FCH simply says that $\{ s_q \mid q \in Q \} \subseteq D$. It thus suffices to show that $\mathbb{S}_\infty \subseteq D$. We will prove an even stronger statement. Note that we can restrict the set $D$ by limiting the maximum depth of the Debate Trees that can handle the respective statements. Formally, we can define
$$D_n := \{ s \in \mathbb{S} \mid \exists T \in \mathbb{T}_s^{\le n} : d(T) \le 1 \}$$
for any $n \in \mathbb{N}$, where $\mathbb{T}_s^{\le n}$ denotes the set of Debate Trees in $\mathbb{T}_s$ with depth at most $n$. ('Depth' is defined as the number of edges in the longest path through the tree.) By construction, we now have $D = \bigcup_{n \in \mathbb{N}} D_n$, just as $\mathbb{S}_\infty = \bigcup_{n \in \mathbb{N}} \mathbb{S}_n$. What we will show is that
$$\mathbb{S}_n = D_n \quad \text{for all } n \in \mathbb{N}.$$
We proceed by induction. First, if $s \in \mathbb{S}_0$, then $d(s) \le 1$, which means that the trivial tree $(\{(s)\}, \varnothing)$ is a Debate Tree of depth 0 that can handle $s$, so that $s \in D_0$. Conversely, if $s \in D_0$, then the Debate tree handling $s$ must have no edges (otherwise, its depth would be at least 1). Thus, $P = \big(((s)),\ s\big)$ is a path in this tree, and we have $d(s) = d(P) \le 1$, hence $s \in \mathbb{S}_0$.
Now, suppose the statement is true for some $n \in \mathbb{N}$. We show that $\mathbb{S}_{n+1} = D_{n+1}$.
"": Let . Then, there exists such that . Apply the Inductive Hypothesis to find Debate Trees of depth at most such that tree handles statement . We combine these trees into a larger tree with root and an additional edge .[7] Since all have depth at most , this tree has depth at most . Furthermore, given any path through , by construction, the path must end in a node that exists in one of the , which implies that . It follows that handles and hence .
"": Let . Then, there exists a Debate Tree of depth at most that handles . Let be the unique[8] edge descending from the root. For each , let be the subtree growing out of .[9] By construction, has depth at most and handles , so (applying the Inductive hypothesis), we have . Then, and , and hence .
3. Interpretation
At this point, we have a bunch more definitions and a theorem. Now, what does this mean?
Let's start with Debate Trees. A Debate Tree is actually a very natural object; it's what you get if you explain a subject in a hierarchical rather than a linear way. ($A$ is true because of $B$ and $C$; then $B$ is true because [...].) It is very similar to Elizabeth's project of breaking questions down [LW · GW]. Notably, that project never mentions Factored Cognition; it's just presented as an epistemic tool.
In a better world, would textbooks use something like Debate Trees to explain proofs? I'm almost certain the answer is yes. There is no way that a purely sequential presentation of information is optimal. Our understanding doesn't work that way (compare post #-2 [LW · GW]).
There is one difference between Debate Trees and a hierarchical presentation of information optimized for being easily understandable. In the former, only one part is actually explored, which means that a Debate Tree doesn't mind having redundancy in it (by explaining stuff in more than one place). Conversely, if you optimize for understandability with respect to a single reader, you'll want to avoid redundancy. Nonetheless, they are very similar.
So much for Debate Trees. What about the theorem we've just proved? What does it mean?
Essentially, it means that Debate is nicely behaved in the limit. As both agents become stronger and the structure of the game becomes stricter, we approach a situation where the scheme can answer a question if and only if its answer can be recursively explained until there are no more difficult components. Even though the game results from two powerful agents applying optimization in opposite directions, the result can be described without mentioning either one of them. Note that the same is not true for Iterated Amplification; even in the limit of perfect Factored Cognition, it is entirely possible that the scheme fails at a question for which an easy explanation exists.
Notably, the theorem stops being true if we drop the assumption that both agents are maximally powerful.[10] If the first agent is weaker, she might fail to find the best explanation, which shrinks the set of statements Debate can handle. Conversely, if the second agent is weaker, she may fail to point to the most problematic statement, which enlarges the set of statements Debate can handle. Do these factors cancel out? I'm not sure. One thing I haven't yet tried but that may be reasonable is to model this inadequacy explicitly and see whether it benefits the first or the second agent more.
It's worth pointing out that the Ideal Debate-FCH doesn't talk about false statements. It formalizes the claim 'the first agent can always win by being honest', which leaves open the possibility that she can also win by being dishonest. (And if she could do both, she would presumably do whatever is easier or safer.) The hypothesis is thus necessary, but perhaps not sufficient, for realizing Factored Cognition with Debate.
I think focusing on the honest case makes the most sense. Nonetheless, the next chapter is about what happens if the first agent wants to defend a lie.
4. Relaxing the truth assumption
To model dishonesty by the first agent, we assume that
- In addition to $\mathbb{S}$, there is a set $\mathbb{F}$ of false statements; the full space of statements is $\mathbb{S} \sqcup \mathbb{F}$.[11]
- Given a question $q$, in addition to the honest answer $s_q \in \mathbb{S}$, there is at least one dishonest answer $\hat{s}_q \in \mathbb{F}$. The first agent may defend this dishonest answer.
- Any explanation for a false statement needs to contain at least one false element, i.e., if $s \in \mathbb{F}$ and $(s_1, \ldots, s_k)$ is an explanation of $s$, then there exists an $i$ such that $s_i \in \mathbb{F}$.
Since the second agent is also maximally powerful, she can repeatedly point to a false statement in the explanation. Due to the third assumption made above, this becomes an invariant: the ability to point to a false statement in any one step leads to the ability to point to a false statement in the next step, and therefore, since she can point to one in the beginning, she can do so at every step in the game.
In particular, this means that, regardless of when the first agent ends the game, we will have that the final statement of the path (the one the judge has to verify) lies in $\mathbb{F}$.
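To see the invariant mechanically, here is a tiny sketch of the second agent's strategy (my own illustration; the first agent's pre-committed tree is flattened into a dict mapping each statement she is willing to explain to its prepared explanation):

```python
def judged_statement(dishonest_answer, prepared_explanations, is_false):
    """Simulate the second agent's strategy 'always point at a false statement'.
    By the third assumption above, every explanation of a false statement
    contains a false statement, so the invariant 'target is false' is preserved
    at every step, and the statement the judge ends up verifying is false."""
    target = dishonest_answer
    assert is_false(target)
    while target in prepared_explanations:  # the first agent keeps explaining
        target = next(s for s in prepared_explanations[target] if is_false(s))
    return target  # false, no matter where the first agent stops


# Toy usage: 'X' is a false answer whose explanations mix true and false statements.
expl = {"X": ("true lemma", "Y"), "Y": ("another true lemma", "Z")}
print(judged_statement("X", expl, is_false=lambda s: s in {"X", "Y", "Z"}))  # 'Z'
```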
At this point, we have two conflicting intuitions. On the one hand, the judge should only declare the first agent the winner if she is certain that the statement to be verified is correct. This should only happen for simple statements; if the final statement is complicated, she should declare the second agent the winner. Thus, the first agent should only be able to win the game by choosing a final statement that looks simple and obviously true, yet is false. This sounds extremely hard to accomplish.
On the other hand, we are, in some sense, pitting a human in a battle of wits against a potentially superintelligent AI, which sure sounds like doom. The superintelligent AI has to follow a very rigid structure, but... it still kinda sounds like doom. In particular, we probably expect that the first agent will navigate the game so cleverly that she only appears to follow the rigid structure while really doing something else. Crucially, the second agent here lacks the equally powerful I/O channel she has in regular Debate, through which she could interfere.
There thus seems to be a tradeoff here, where at one end we have the 'stock Debate' protocol that enforces no structure, and at the other the 'Ideal Debate' protocol that enforces an extremely rigid structure. More structure means more concern with things like wireheading, but also stronger theoretical guarantees.
I strongly suspect that running literal Ideal Debate is a recipe for disaster since the first agent can break away from the format entirely and attempt to wirehead the judge. At the very least, preventing this would require a way for the judge to prematurely declare a winner (in literal Ideal Debate, the first agent has complete control over when to end the game), and it would probably also require a way for the second agent to claim that wireheading is occurring. On the other hand, I do have the intuition that one should aim to get as close to Ideal Debate as possible while managing these concerns.
5. The HCH-FCH
Ideal Debate has been amenable to a formal description because the human only makes decisions at the end. In HCH, the human is involved throughout and, crucially, is in charge of decomposing questions. To have an accurate model, one needs to abstract away the entire process of the human decomposing the question, as well as any other cognitive work she might do on it – but once this is done, there is nothing left to formally capture.
(An older version of this post attempted to formalize this further, but I've since concluded that it can't be done right.)
To have something analogous to the Ideal Debate-FCH, we trivially define:
HCH-FCH. For every question $q \in Q$, HCH (with human $H$) outputs a correct answer to $q$.
6. Conclusion
This post concludes the first part of the sequence. There probably weren't any huge surprises so far. Is having a formalism useful?
I think so. One purpose of formalizing a setup is that it overcomes the illusion of transparency [LW · GW]: without a formalism, it is possible to think the setup is clear even when it really is not. I can take myself as one data point: my first attempt at coming up with a formalism looked different (and, I think, wrong). At the very least, my past self did not understand the problem well enough for the formalism to be trivial.
Anyway, at this point in the sequence, I want to defend the following two claims:
- It doesn't make any sense to talk about the 'Factored Cognition Hypothesis'; there is no one requirement that works for both schemes.
- The formalism of Cognition Spaces is accurate, in that it doesn't include anything that misrepresents Factored Cognition as implemented by IDA or Debate. It may be incomplete (e.g., there is nothing about defining terms, as people have pointed out on post #1, and maybe you could do more with false statements).
I think these are pretty conservative claims, even though the first clearly contradicts what I've heard other people say. If anyone doubts them, this is the place to discuss it. My conclusions from the second part are probably going to be a lot more controversial.
A directed rooted tree, also called an arborescence, is a directed acyclic graph such that there exists [a node from which there is exactly one path to every other node]. This node is called the root; it's unique because, if there were two such nodes $r$ and $r'$, there would be a path $r \to r'$ and a path $r' \to r$, and hence a cycle. ↩︎
We continue writing $s \in v$ to denote that the statement $s$ appears in the sequence $v$. ↩︎
The 'trivial tree' $(\{(s)\}, \varnothing)$ for some $s \in \mathbb{S}$ is a proper Debate Tree, according to this definition. It corresponds to the first Debate agent giving an answer to the input question and deciding that this answer is already self-evident. ↩︎
Note that the first element of this pair is a path as defined in graph theory (through the Debate Tree, which is a graph), whereas the second is an additional element that denotes which statement of the final explanation we end up at. ↩︎
In particular, one could model the same thing by having a difficulty threshold $t$ for a human, such that the human can deal with all statements that are at most $t$ hard. However, the pair $(d, t)$ is equivalent to the pair $(d', 1)$, where $d'$ is like $d$ except that all difficulties are scaled by $\frac{1}{t}$. ↩︎
Given this, one could alternatively phrase the Ideal Debate FCH as
$$\forall q \in Q\ \exists T \in \mathbb{T}_{s_q} : d(T) \le 1.$$
Or in words, one could say, 'every question in $Q$ can be handled by a Debate Tree'. ↩︎
This is a step where defining the nodes of Debate Trees to be explanations rather than individual statements makes things harder. For each subtree $T_i$, one needs to replace its current root (that's a one-element sequence) with the node $(s_1, \ldots, s_k)$. This can be done formally in terms of the underlying node and edge sets; it's just cumbersome. ↩︎
The edge from the root is unique because the root is a one-element sequence $(s)$, and each statement can only have one explanation (and thus only one outgoing edge) by the third condition of Debate Trees. This corresponds to the fact that the first agent only has to prepare one explanation for her initial answer; there is not yet a choice from the second agent involved. ↩︎
This step is the reverse of the combining step. The subtree growing out of $s_i$ is the tree we get by taking the corresponding sub-graph out of $T$ and replacing its root with just $(s_i)$. ↩︎
Recall that there is, in fact, only one agent playing against itself. Thus, we can assume that both agents are always equally competent. ↩︎
The 'squared cup' symbol $\sqcup$ means the same as $\cup$, plus the information that the two sets are disjoint. ↩︎
Comments
comment by William_S · 2020-12-17T21:31:08.367Z · LW(p) · GW(p)
It seems that in principle a version of debate where only one agent makes statements and the other chooses which statements to expand could work, but it seems like it requires the judge to be very strict that the statement is 100% true. It seems hard to apply this kind of system to statements outside of formal mathematics.
Systems where both agents can make statements seem like they might be less vulnerable to judges accepting statements that aren't 100% true. For one example, if both agents take turns being the arguer, then if both agents submit a path that is judged to be correct, you can stipulate that the agent with the shortest path wins (like imposing a simplicity prior).
comment by Rafael Harth (sil-ver) · 2020-12-07T18:49:31.355Z · LW(p) · GW(p)
Somewhat tangential to the sequence itself, I'm pretty interested in the idea of using non-linear structure to explain stuff, especially in math. Section titles and footnotes can function like that, but to a really limited degree. I think Arbital has somewhat of a hierarchical structure? But I've never seen it taken far.
Does anyone have strong feelings about this?
↑ comment by Steven Byrnes (steve2152) · 2020-12-10T14:46:50.975Z · LW(p) · GW(p)
Isn't wikipedia nonlinear? It says "A because of B and C and D". Then if you're confused about C, you click the C link and read the C article. Right?
↑ comment by Rafael Harth (sil-ver) · 2020-12-10T15:55:34.337Z · LW(p) · GW(p)
Yeah, I guess it's hierarchical on the level of articles, although each article is linear.
And I would argue the hierarchical structure is a big asset.
↑ comment by William_S · 2020-12-17T21:17:57.392Z · LW(p) · GW(p)
https://www.kialo.com/ lets people build debates on controversial topics in a hierarchical structure (more like stock debate, with both sides providing arguments), but doesn't seem to have been used for explanations/arguments. I'd also be pretty interested to see more attempts at hierarchical explanations.