Topological Debate Framework
post by lunatic_at_large · 2025-01-16T17:19:25.816Z
I would like to thank Professor Vincent Conitzer, Caspar Oesterheld, Bernardo Subercaseaux, Matan Shtepel, and Robert Trosten for many excellent conversations and insights. All mistakes are my own.
I think that there's a fundamental connection between AI Safety via Debate and Guaranteed Safe AI via topology. After thinking about AI Safety via Debate for nearly two years, this perspective suddenly made everything click into place for me. All you need is a directed set of halting Turing Machines!
Motivating Example
... Okay, what?? Let's warm up with the following example:
Let's say that you're working on a new airplane and someone hands you a potential design. The wings look flimsy to you and you're concerned that they might snap off in flight. You want to know whether the wings will hold up before you spend money building a prototype. You have access to some 3D mechanical modeling software that you trust. This software can simulate the whole airplane at any positive resolution, whether it be 1 meter or 1 centimeter or 1 nanometer.
Ideally you would like to run the simulation at a resolution of 0 meters. Unfortunately that's not possible. What can you do instead? Well, you can note that all sufficiently small resolutions should result in the same conclusion. If they didn't then the whole idea of the simulations approximating reality would break down. You declare that if all sufficiently small resolutions show the wings snapping then the real wings will snap and if all sufficiently small resolutions show the wings to be safe then the real wings will be safe.
How small is "sufficiently small?" A priori you don't know. You could pick a size that feels sufficient, run a few tests to make sure the answer seems reasonable, and be done. Alternatively, you could use the two computationally unbounded AI agents with known utility functions that you have access to.
Let's use the two computationally unbounded AI agents with known utility functions. One of these agents has the utility function "convince people that the wings are safe" and the other has the utility function "convince people that the wings will snap." You go to these agents and say "hey, please tell me a resolution small enough that the simulation's answer doesn't change if you make it smaller." The two agents obligingly give you two sizes.
What do you do now? You pick the smaller of the two! Whichever agent is arguing for the correct position can answer honestly; whichever agent is arguing for the incorrect position must lie. Our test is at least as detailed as the correct debater's proposal, so the simulation will conclude in the correct debater's favor.
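To make the procedure concrete, here is a minimal sketch in Python. Every name is a hypothetical stand-in: `simulate(resolution)` is the trusted simulator returning its verdict, and each agent's `propose_resolution()` is whatever query interface we have to that debater.

```python
def single_resolution_debate(simulate, safe_agent, snap_agent):
    # Each debater names a resolution below which (it claims) the simulator's
    # verdict no longer changes and favors its side.
    r_safe = safe_agent.propose_resolution()
    r_snap = snap_agent.propose_resolution()
    # Run the simulation at the finer (smaller) of the two proposals: it is at
    # least as fine as the honest debater's proposal, so under optimal play the
    # verdict matches the truth.
    return simulate(min(r_safe, r_snap))
```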
Motivating Example, Partial Order Edition
... Hang on, in reality we'd have a computational budget for our simulation. We should always just test at the limit of what we can afford, no?
Let's say that you get access to some fancier mechanical simulation software. Let's say that you can supply this software with a position-dependent resolution map, e.g. you can request 5mm accuracy around the nose of the airplane but 1mm accuracy around the tail. How do you define your question now?
Hypothetically, suppose that for every resolution map there were another resolution map refining it (i.e. at every point, the new resolution is at least as fine as the previous resolution) such that the refined simulation reached a different conclusion than the original simulation. For any simulation concluding that the wings are safe there would be a better simulation showing they snap, and for any simulation concluding that the wings snap there would be a better simulation showing they're safe. In this situation your simulation software is effectively useless: you decide to rule out this possibility.
You thus assume the opposite, namely that there's some resolution map for which every refinement leads to the same conclusion. Now something nice happens: you can't have both a "snap" resolution map and a "safe" resolution map whose refinements all agree with them! Why? Construct a new resolution map by taking the finer resolution of the two original maps at every point in space. This resolution map is a refinement of both original maps simultaneously. If the new map reaches the "safe" conclusion then the "snap" map had a contradictory refinement; if the new map reaches the "snap" conclusion then the "safe" map had a contradictory refinement.
You can thus copy your definition from the one-resolution setup! You want to know the conclusion of all "sufficiently fine" resolution maps, where "sufficiently fine" means a refinement of some fixed resolution map such that the definition makes sense.
So how do you determine what all sufficiently fine resolution maps conclude? You do the same thing as before! You ask each AI agent to point to a resolution map such that all refinements of that map support that agent's position. You take the resolution maps handed to you by the two agents, you construct a new resolution map by looking at every point and choosing the smaller of the two proposed resolutions at that point, and finally you run your simulation with the new resolution map. The agent arguing for the correct position can always win by responding honestly and thus you should always get the right answer.
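As a rough sketch (with the hypothetical choice of representing a resolution map as a dictionary from regions of the airframe to mesh sizes), the combination step is just a pointwise minimum:

```python
def combine_resolution_maps(map_a, map_b):
    """Take the finer (smaller) resolution at every region, so the result
    refines both proposed maps simultaneously."""
    regions = set(map_a) | set(map_b)
    return {
        region: min(map_a.get(region, float("inf")),   # missing region = no constraint
                    map_b.get(region, float("inf")))
        for region in regions
    }

def resolution_map_debate(simulate, safe_agent, snap_agent):
    m_safe = safe_agent.propose_map()   # claims every refinement concludes "safe"
    m_snap = snap_agent.propose_map()   # claims every refinement concludes "snap"
    return simulate(combine_resolution_maps(m_safe, m_snap))
```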
Note that it doesn't hurt to provide our debaters with an extra little incentive to give us as coarse a simulation as possible while satisfying our demand. What might this look like in practice? Suppose for a second that the wings really are safe. The "safe" debater should highlight the components of the airframe which are necessary to ensure the safety of the wings, such as the wing spars. If the "snap" debater feels guaranteed to lose then they might return a maximally-coarse resolution map. Alternatively, if the coarseness incentive is small and the snap debater thinks the safe debater might mess up then maybe the snap debater returns a resolution map that shows the snapping behavior as dramatically as possible, maybe by using high resolution around the weak wing roots and low resolution around the reinforcing wing spar. Thus, you can expect to end up running a simulation that focuses your computational resources on the wing spars and the wing roots and whatever other bits of the airframe are critical to answering your question while deprioritizing everything else. The game result tells you what's important even if you didn't know a priori.
The Framework (Roughly)
Okay, let's make everything we just did abstract. Suppose $(W, \preceq)$ is a preordered set of world models and $f : W \to \{0, 1\}$ is an evaluation map that takes in a world model and decides whether or not it has some fixed property of interest. In our previous example, the set of resolution maps was $W$, our idea of refinement was the preorder $\preceq$, and our simulation software was $f$. If $(W, \preceq)$ happens to be a directed set (any two world models have a common upper bound, as above) then we can view $f$ as a net into $\{0, 1\}$ equipped with the discrete topology. Note that in our resolution map example we were simply asking whether the net defined by the simulation software converged to $0$ or $1$! We will take net convergence to be our question of interest in the general case.
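Spelled out in this notation (the specific symbols are my shorthand for this writeup), convergence of the net to $b \in \{0, 1\}$ is just eventual constancy, since $\{0, 1\}$ carries the discrete topology:

$$f \to b \quad \Longleftrightarrow \quad \exists\, w \in W \;\; \forall\, w' \succeq w : \; f(w') = b.$$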
Let's also assume that we have access to a pool of agents that respond to incentives we provide. Then we can answer a few different questions with a few different game setups:
- Suppose we assume that $(W, \preceq)$ is directed and we want to know whether $f$ converges to $0$, converges to $1$, or doesn't converge. We construct the following game: one agent provides us with a world model $w \in W$ and a bit $b \in \{0, 1\}$. Another agent then provides us with a world model $w'$. If $w' \succeq w$ and $f(w') \neq b$ then the second agent wins, otherwise the first agent wins. Intuitively, the first agent is incentivized to convince us that the net converges and the second debater is incentivized to convince us that the net does not converge. If both agents play optimally then a first-agent win with $b = 0$ tells us that the net converges to $0$, a first-agent win with $b = 1$ tells us that the net converges to $1$, and a second-agent win tells us that the net does not converge. (A minimal sketch of this game appears right after this list.)
- Let's relax the assumption that $(W, \preceq)$ is a directed set. We now want to know for which $b \in \{0, 1\}$ there is some $w \in W$ with $f(w') = b$ for every $w' \succeq w$. We can duplicate the previous game for each value of $b$: one agent points to a $w$ for $b = 0$, a second agent points to a corresponding $w' \succeq w$, a third agent points to a $w$ for $b = 1$, a fourth agent points to another corresponding $w' \succeq w$. You can probably combine the first and fourth agents and combine the second and third agents to get a single 0-advocate and a single 1-advocate.
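Here is a minimal sketch of the first game above. All of the names are illustrative assumptions: `evaluate` plays the role of the evaluation map $f$, `refines(w2, w1)` checks $w_1 \preceq w_2$, and the two agents are whatever incentivized systems we can query.

```python
def convergence_game(evaluate, refines, convergence_advocate, divergence_advocate):
    # Claim: the evaluation map equals b on every refinement of w.
    w, b = convergence_advocate.propose()
    # Attempted refutation: a refinement of w on which the evaluation disagrees with b.
    w_prime = divergence_advocate.respond(w, b)
    if refines(w_prime, w) and evaluate(w_prime) != b:
        return "does not converge"        # the divergence advocate wins
    return f"converges to {b}"            # the convergence advocate wins
```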
We may also face computational constraints in running simulations. Suppose there's an efficiently-computable function $c : W \to \mathbb{R}_{\geq 0}$ which estimates how much it will cost to simulate any given world model. Then we can answer some more questions with games:
- If we're willing to assume for some particular bound $B$ that the convergence behavior of our net is exhibited by some $w \in W$ with $c(w) \leq B$, what's the maximum cost possibly needed to determine the convergence behavior?
- Conversely, if we're handed a budget $B$, what's the maximum bound $B'$ so that if our convergence behavior is exhibited by a $w \in W$ with $c(w) \leq B'$ then we'll be able to determine this convergence behavior?
What Can This Framework Handle?
Here are a few situations:
- We can reason about real-world physical simulations as discussed above.
- Suppose we have some kind of probabilistic graphical model (Bayesian network, factor graph, etc.), potentially with infinitely many nodes. Suppose we're interested in some particular binary node and we want to know whether it's $1$ with probability greater than or less than $1/2$ when we condition on the values of all the other nodes (let's assume that the true conditional probability doesn't happen to equal $1/2$). We can determine the values of finitely many other nodes but it's an expensive process and we'd like to check only what we need. In this setup, world models are subsets of nodes we can reveal, one world model refines another if it contains a superset of the nodes of the other, our preorder gives us a directed set because we can take unions of sets of nodes, and the evaluation map is whatever conditionalization software we have.
- In a more theoretical-computer-science-friendly setting, suppose we have a function $f : \{0, 1\}^n \to \{0, 1\}$ and a target input $x \in \{0, 1\}^n$. We want to compute $f(x)$ but doing so is slow. Instead, we construct a function $g : \{0, 1, *\}^n \to \{0, 1, *\}$ where $g(y) = b \in \{0, 1\}$ if every completion of $y$ maps to $b$ under $f$, and $g(y) = *$ otherwise. Ideally, computing $g$ is fast if the input has many "$*$" characters. We have $g(x) = f(x)$ so we focus on finding $g(x)$. Again, world models are subsets of indices of $x$ which we reveal (with the unrevealed indices replaced by "$*$"), the ordering is based on the subset relation, the ordering gives us a directed set because we can take unions of sets of indices, and the evaluation map is given by $g$. I believe this kind of setup has some analogies in the explainable AI literature. (A toy sketch of this example appears right after this list.)
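Here is a toy sketch of that last example. The choice of $f$ (majority of three bits) and the brute-force implementation of $g$ are mine and purely illustrative; in particular this $g$ is not fast, it only demonstrates the definition of the evaluation map on partial inputs, with `None` standing in for "$*$".

```python
from itertools import product

def f(x):
    """Toy 'slow' function: majority of three bits."""
    return int(sum(x) >= 2)

def g(y):
    """Evaluation map on partial inputs: return b if every completion of y maps
    to b under f, and None if the revealed bits don't yet settle the answer."""
    unknown = [i for i, v in enumerate(y) if v is None]
    outcomes = set()
    for fill in product([0, 1], repeat=len(unknown)):
        z = list(y)
        for i, bit in zip(unknown, fill):
            z[i] = bit
        outcomes.add(f(tuple(z)))
    return outcomes.pop() if len(outcomes) == 1 else None

print(g((1, 1, None)))     # -> 1: two revealed bits already pin down f(x)
print(g((1, None, None)))  # -> None: one revealed bit isn't enough
```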
Conventional AI Safety via Debate
Something I did not include in the previous section is anyone else's formulation of AI Safety via Debate. I feel bad calling topological debate "debate" at all because qualitatively it's very different from what people usually mean by AI Safety via Debate. Topological debate focuses on the scope of what should be computed, whereas conventional debate makes more sense with respect to some fixed but intractable computation. Topological debate realizes its full power after a constant number of moves, while conventional debate increases in power as we allow more rounds.
In fact, I think it's an interesting question whether we can combine topological debate with conventional debate: we can run topological debate to select a computation to perform and then run conventional debate to estimate the value of that computation.
Turing Machine Version
So far we've had to specify two objects: a directed set of world models and an evaluation function. Suppose our evaluation function is specified by some computer program. For any given world model we can hard-code that world model into the code of the evaluation function to get a new computer program which accepts no input and returns the evaluation of our world model. We can thus turn our directed set of world models into a directed set of computations (let's say halting Turing Machines for simplicity).
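A minimal sketch of this move, assuming the evaluation function is an ordinary program `evaluate(world_model) -> bool` and that world models are hashable: partially applying it to each world model yields a family of argument-free computations, the analogue of the input-free machines just described.

```python
from functools import partial

def specialize(evaluate, world_models):
    """Turn a collection of world models into a collection of computations by
    hard-coding each world model into the evaluation program."""
    # Each value is a zero-argument callable; calling it evaluates that world
    # model and returns accept (True) or reject (False).
    return {w: partial(evaluate, w) for w in world_models}
```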
We get two benefits as a result:
- We can naturally handle the generalization to multiple evaluation functions. For example, maybe we have access to lots of simulation programs, some of which we trust more than others. Suppose our preferences over simulation programs form a preorder. Then there's a natural way to define the product preorder from our preorder of world models and our preorder of simulation programs. This product can be interpreted as a preorder of Turing Machines. Likewise, if our world models and simulation programs both form directed sets then their product will be another directed set.
- We no longer need to specify our evaluation function since now there's a canonical choice: if a machine accepts then we map it to $1$, if a machine rejects then we map it to $0$. Similarly, if we want to reason about cost functions then again there's a natural choice: the number of steps needed to halt. (Both points are sketched right after this list.)
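A small sketch of both points, under assumed representations: a preorder is given as a `leq(a, b)` predicate and a specialized computation is a zero-argument callable returning accept (`True`) or reject (`False`).

```python
def product_leq(world_leq, sim_leq):
    """Product preorder on (world model, simulation program) pairs:
    one pair precedes another iff it does so in both coordinates."""
    return lambda a, b: world_leq(a[0], b[0]) and sim_leq(a[1], b[1])

def canonical_evaluation(computation):
    """Canonical evaluation map: accept -> 1, reject -> 0."""
    return 1 if computation() else 0
```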
Limitations
Criticisms of topological debate include but are not limited to:
- ... yeah, where exactly are we supposed to get those directed sets of world models and evaluation functions from?
  - Using topological debate on a given problem instance requires either having simulation programs and ways of comparing world models and whatnot in place beforehand or some other mechanism for determining those on the fly. The Guaranteed Safe AI agenda seems focused on creating at least some kind of infrastructure like this so I view topological debate as piggybacking off of that work.
- What do you do when there's a difficulty asymmetry between finding a world model that maps to $0$ versus finding a world model that maps to $1$? What if the agents are computationally bounded?
  - I don't know in general. For example, maybe an attacker puts a backdoor in a proposal and the debater arguing that the proposal is unsafe is unable to locate the backdoor. However, maybe the debater can show that they wouldn't be able to locate a backdoor even if one existed, for instance by constructing another proposal with another backdoor whose flaw the original proposer can't find.
  - Conventional AI Safety via Debate of course gets much more interesting in the computationally bounded case so I expect a similar phenomenon to happen here. To me this is the most interesting theoretical question about topological debate.
- Why should we trust the debaters to follow their rewards?
  - Yeah, solving this problem in general feels somewhat alignment-complete. With that said, I see a few encouraging properties about this setup:
    - Topological debate is very friendly to "snitching." For example, imagine we ask 50 agents to provide a world model whose refinements all map to $0$ and another 50 agents to provide a world model whose refinements all map to $1$. Suppose the net converges to $0$ but the agents have some extraneous incentive for us to believe that the answer is $1$. Even if 49 of the agents tasked with arguing for $0$ provide intentionally bad responses, a single agent loyal to their side can provide a world model which none of the other agents can refine to conclude $1$.
    - As with conventional AI Safety via Debate, we're not asking these AI models to learn human values perfectly; we're asking them to stick to their assigned side. I personally expect models will somewhat reliably do this.