Multibit reflective oracles
post by Benya_Fallenstein (Benja_Fallenstein) · 2015-01-25T02:23:00.000Z
This post describes a new version of the reflective oracles I described in a previous forum post. This work extends probabilistic Turing machines with an oracle that answers certain kinds of queries about other probabilistic Turing machines with the same kind of oracle; straightforward ways of doing this would lead to diagonalization problems, which this kind of oracle avoids by answering certain queries probabilistically. Applications include a variant of AIXI that is able to reason about universes containing other instances of the same variant of AIXI; I expect we'll have forum posts about this in the coming weeks.
The new version, which is based on this comment thread with Paul, is significantly simpler to use than the previous version: The previous version only gave useful answers about machines that halt with probability one, no matter how the oracle behaves. The new version basically just doesn't have an analogous requirement. In addition, the new version allows for queries about machines that output more than a single bit; it turns out that this makes it much simpler to define simplicity priors of the kind used in AIXI, doing away with most of the complexity I described in "Simplicity priors with reflective oracles".
This is still a fairly technical post, whose purpose is to help with figuring out the details of a nice version of reflective oracles and of their applications. At some point in the future when all of this has stabilized more, I expect to write a more introductory post on these topics.
ETA: Thanks to Matt Elder, who helped me think through the details of this system! I wrote the original post pretty quickly and forgot to acknowledge this, mea culpa.
Let's write $\mathbb{B} = \{0,1\}$ for the set of bits, $\mathbb{B}^*$ for the set of finite bitstrings, and $\mathbb{B}^\omega$ for the set of infinite streams of bits; further, let's write $\varepsilon$ for the empty bitstring. Then $\Delta\mathbb{B}$ is the set of probability distributions over single bits, which we can represent by a single number in $[0,1]$, specifying the probability that the bit is $1$; that is, we can think of $\Delta\mathbb{B}$ as the set $[0,1]$. Moreover, $\Delta\mathbb{B}^\omega$ is the set of probability distributions over infinite bitstreams, which we can represent by functions $f : \mathbb{B}^* \to [0,1]$, where $f(x)$ represents the probability that the infinite bitstream starts with the finite bitstring $x$, such that $f(\varepsilon) = 1$ (every bitstream starts with the empty bitstring), and $f(x) = f(x0) + f(x1)$ for all $x \in \mathbb{B}^*$. In other words, we can think of $\Delta\mathbb{B}^\omega$ as the set of all functions $f : \mathbb{B}^* \to [0,1]$ which satisfy these two conditions.
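As a concrete example, the uniform distribution over infinite bitstreams corresponds to $f(x) = 2^{-|x|}$. Below is a minimal Python sketch of this representation, checking the two defining conditions up to a finite length; the names and the truncation to length `n` are my own illustrative choices, not part of the post's formalism.

```python
from itertools import product

def uniform_measure(x: str) -> float:
    """f(x) = 2^{-|x|}: the uniform distribution over infinite bitstreams."""
    return 2.0 ** (-len(x))

def check_stream_distribution(f, n: int, tol: float = 1e-9) -> bool:
    """Check, for all strings of length < n, the two conditions defining an
    element of Delta B^omega: f(empty) = 1, and f(x) = f(x + '0') + f(x + '1')."""
    if abs(f("") - 1.0) > tol:
        return False
    for k in range(n):
        for bits in product("01", repeat=k):
            x = "".join(bits)
            if abs(f(x) - (f(x + "0") + f(x + "1"))) > tol:
                return False
    return True

assert check_stream_distribution(uniform_measure, n=6)
```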
Let's also write $[0,1]_{\mathbb{Q}} := [0,1] \cap \mathbb{Q}$ for the set of probabilities that are rational numbers, and $\mathcal{M}$ for the set of all Gödel numbers of probabilistic Turing machines that can make calls to a probabilistic oracle (i.e., to an oracle that may answer some queries probabilistically). We will consider machines with advance-only binary output tapes---that is, the machine has instructions to write a $0$ or a $1$ on its output tape, and no way of going back and deleting something it has written; thus, a machine's output is either an element of $\mathbb{B}^*$ (if the machine halts at some point, or runs forever but stops producing output after a finite amount of time), or an element of $\mathbb{B}^\omega$ (if the machine keeps producing output).
A query to the oracle is a triple $(M, x, p) \in \mathcal{M} \times \mathbb{B}^* \times [0,1]_{\mathbb{Q}}$, roughly interpreted as asking whether the probability that the output of the oracle machine $M$ starts with $x$ is greater than $p$. The output of the oracle is a single bit, which, on some queries, may be chosen randomly: If $M$ definitely produces at least $|x|$ bits of output, and $p$ is greater than the probability that the output starts with $x$, the oracle definitely outputs $0$; if $p$ is smaller, it definitely outputs $1$; if it is equal, the oracle's output may be random, with the probability of outputting $1$ anywhere in $[0,1]$. (This is crucial for getting around diagonalization.) If $M$ isn't guaranteed to produce at least $|x|$ bits of output, then see below.
In the new formulation, an oracle is a pair of functions, $O = (\eta, \mu)$. The first of these, $\eta : \mathcal{M} \times \mathbb{B}^* \times [0,1]_{\mathbb{Q}} \to [0,1]$, specifies the behavior of the oracle: $\eta(M, x, p)$ is the probability that the oracle outputs $1$ when called on the query $(M, x, p)$.
The second function, $\mu$, has to do with what happens when we query the oracle about machines which aren't guaranteed to produce $|x|$ bits. In an intuitive sense, the idea is that we pretend that every machine always produces an infinite bitstream, even if in reality, the machine goes into an infinite loop after producing only a finite output; in this case, we choose some arbitrary probability distribution over the "output" we pretend the machine produces after this point. The function $\mu : \mathcal{M} \to \Delta\mathbb{B}^\omega$ specifies the resulting probability distribution over infinite bitstreams; that is, $\mu(M)$ is the distribution that the oracle pretends the machine $M$ produces. Since we represent elements of $\Delta\mathbb{B}^\omega$ as functions from $\mathbb{B}^*$ to $[0,1]$, this means that $\mu(M)(x)$ is what our oracle pretends is the probability that $M$'s output starts with $x$. In particular, if $p < \mu(M)(x)$, then invoking the oracle on $(M, x, p)$ will definitely output $1$, i.e., then $\eta(M, x, p) = 1$; and if $p > \mu(M)(x)$, then $\eta(M, x, p) = 0$.
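As a small illustration (my own sketch, not the post's definition of the oracle), here is how the first component $\eta$ is pinned down by $\mu$ away from the boundary; only queries with $p$ exactly equal to $\mu(M)(x)$ leave the oracle any freedom, which is exactly the slack used to avoid diagonalization:

```python
def answer_probability(mu_M_x: float, p: float, boundary: float = 0.5) -> float:
    """Probability that the oracle answers 1 on the query (M, x, p), given that
    it pretends M's output starts with x with probability mu_M_x.  `boundary`
    is an arbitrary value in [0, 1], used only in the knife-edge case p == mu_M_x."""
    if p < mu_M_x:
        return 1.0       # the oracle must answer 1
    if p > mu_M_x:
        return 0.0       # the oracle must answer 0
    return boundary      # on the boundary, any answer probability is allowed
```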
The intuition for $\mu$ is that it should only "fill in" probabilities when actually running $M$ stops generating any output; as long as $M$ keeps generating output, the distribution $\mu(M)$ should reflect this. We can formalize this by the condition that for any bitstring $x$, the probability $\mu(M)(x)$ should be greater than or equal to the probability that $M$ produces output beginning with $x$, given that calls to the oracle are answered according to $\eta$.
At the end of this post, I'll show that a pair $(\eta, \mu)$ satisfying these conditions does in fact exist.
First, though, let me give you an indication of why it seems useful to move from single-bit to multi-bit oracles. I should say, first of all, that they aren't actually more powerful than single-bit oracles---Marcello has shown that you can construct a multi-bit oracle given an oracle that works as described above, but allows only for single-bit queries---but it's not as easy as you might think; Marcello's construction uses three different tricks which you'd need to wrap your head around first. It seems easier to define the multi-bit oracle directly.
The place where multi-bit oracles are useful is when you want to define a simplicity prior, as in Solomonoff induction. In the reflective oracle setting, the natural way of doing so is to define a universal prior over bitstrings as an oracle machine that samples the Gödel number $M$ of another machine with probability proportional to $2^{-|M|}$, and then just runs $M$. We'd then like to use our oracle to answer queries about the resulting distribution over bitstrings. However, like in the case of Solomonoff induction, we will need to deal with the fact that not all machines produce an infinite amount of output. It would be nice if we could just check whether a given $M$ eventually loops, and not run it if it does, but we won't be able to define an oracle that lets us do so (for machines with access to the same oracle), because of the halting problem: otherwise, we could write an $M$ that asks the oracle whether it goes into an infinite loop, and does so iff the oracle says it doesn't.
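To make the diagonalization explicit, here is a hypothetical sketch (mine; `loops_forever` stands for the assumed, and in fact impossible, oracle query):

```python
def diagonalizer(loops_forever):
    """Hypothetical machine M that defeats any purported 'does M stop producing
    output?' oracle: whatever `loops_forever` answers about M, M does the opposite."""
    def M():
        if loops_forever(M):
            return            # oracle says M loops forever -> M halts immediately
        while True:
            pass              # oracle says M halts -> M loops forever
    return M
```

Whatever answer `loops_forever` gives about `M` is wrong; this is why the construction above instead lets $\mu$ fill in arbitrary probabilities for output that is never produced, rather than trying to detect looping.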
I've previously described a bag of tricks for solving this problem in the context of my original definition of reflective oracles, but with the definition given above, the problem becomes trivial: We just define our simplicity prior as a machine $U$ that samples an arbitrary $M \in \mathcal{M}$ (with probability proportional to $2^{-|M|}$) and runs it, and then use our oracle on $U$ to give us a corresponding distribution $\mu(U)$ over infinite bitstreams. If the $M$ we sample fails to output an infinite stream of bits, our oracle will fill in the probabilities arbitrarily, so we get a consistent distribution over infinite bitstreams. And we still have the essential property we expect of a simplicity prior: For any machine $M$, there is a constant $c_M > 0$ such that $\mu(U)(x)$ is lower-bounded by $c_M$ times the probability that $M$'s output starts with $x$.
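To spell out where the constant comes from (my own derivation, assuming $U$ samples $M$ with probability exactly $\alpha \cdot 2^{-|M|}$ for some normalizing constant $\alpha > 0$): by the lower-bound condition on $\mu$,

$$\mu(U)(x) \;\ge\; \Pr[U\text{'s output starts with } x] \;\ge\; \alpha \, 2^{-|M|} \cdot \Pr[M\text{'s output starts with } x],$$

so we can take $c_M = \alpha \, 2^{-|M|}$.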
Before giving the existence proof, I'll give a more formal definition of the conditions we place on $\eta$ and $\mu$.
In order to state these conditions, it will be helpful to assume that machines are specified in a way which includes the current state of their working tapes: Thus, we can run a machine $M$ for a single step and describe its next state as a new machine $M'$. (In other words, elements of $\mathcal{M}$ are really machine states.) Then, we can describe every machine $M$ by its behavior when running it for a single timestep: Either it executes a single deterministic computation step, leading to a new machine state $M'$, which we write as $M \to M'$; or it outputs a single bit $b$ and transitions to state $M'$, which we write as $M \to b\,M'$; or it queries the oracle on some triple $(N, y, p)$, and transitions into state $M_1$ or $M_0$ depending on the answer, which we write as $M \to (N, y, p)\,?\,M_1 : M_0$; or it flips a fair coin and transitions to $M_1$ or $M_0$ depending on the result, which we write as $M \to \mathrm{flip}\,?\,M_1 : M_0$; or it has halted and does nothing further. In a slight abuse of notation, we'll write things like $M \to b\,M'$ in the conditions below, which is really supposed to mean "for any state $M$ in the set of states whose one-step behavior is described by the pattern on the right-hand side".
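For readers who prefer code, here is a minimal sketch (my own, with illustrative names; not part of the original formalism) of the possible one-step behaviors as a tagged union. Real machine states are finite descriptions whose successors are computed rather than stored; the toy types below just fix the shape of the data that the reflection conditions quantify over.

```python
from dataclasses import dataclass
from typing import Union

Machine = Union["Step", "Output", "Query", "Flip", "Halt"]

@dataclass
class Step:    # M -> M': one deterministic computation step
    next: "Machine"

@dataclass
class Output:  # M -> b M': write the bit b, then behave like `next`
    bit: int
    next: "Machine"

@dataclass
class Query:   # M -> (N, y, p) ? M1 : M0: ask the oracle, branch on its answer
    target: "Machine"
    prefix: str
    p: float
    if_one: "Machine"
    if_zero: "Machine"

@dataclass
class Flip:    # M -> flip ? M1 : M0: flip a fair coin, branch on the result
    if_one: "Machine"
    if_zero: "Machine"

@dataclass
class Halt:    # no further transitions; mu may fill in the rest arbitrarily
    pass
```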
An oracle is a pair $O = (\eta, \mu)$, where $\eta : \mathcal{M} \times \mathbb{B}^* \times [0,1]_{\mathbb{Q}} \to [0,1]$ and $\mu : \mathcal{M} \to \Delta\mathbb{B}^\omega$. We now define a notion of an oracle $O'$ "reflecting" another oracle $O$; an oracle is reflective if it reflects itself. An oracle $O' = (\eta', \mu')$ reflects an oracle $O = (\eta, \mu)$ if the following conditions are satisfied:
- If $p < \mu(M)(x)$, then $\eta'(M, x, p) = 1$.
- If $p > \mu(M)(x)$, then $\eta'(M, x, p) = 0$.
- If $M \to M'$, then $\mu'(M)(x) = \mu(M')(x)$ for all $x \in \mathbb{B}^*$.
- If $M \to b\,M'$, then $\mu'(M)(b\,x) = \mu(M')(x)$ for all $x \in \mathbb{B}^*$.
- If $M \to b\,M'$, then $\mu'(M)(\bar{b}\,x) = 0$ for all $x \in \mathbb{B}^*$, where $\bar{b}$ is the bit other than $b$.
- If $M \to (N, y, p)\,?\,M_1 : M_0$, then $\mu'(M)(x) = \eta(N, y, p)\,\mu(M_1)(x) + (1 - \eta(N, y, p))\,\mu(M_0)(x)$ for all $x \in \mathbb{B}^*$.
- If $M \to \mathrm{flip}\,?\,M_1 : M_0$, then $\mu'(M)(x) = \tfrac{1}{2}\,\mu(M_1)(x) + \tfrac{1}{2}\,\mu(M_0)(x)$ for all $x \in \mathbb{B}^*$.
The first two of these conditions just write out in symbols what I earlier said in words. The latter five conditions imply the demand that $\mu'(M)(x)$ must be lower-bounded by the probability that $M$ produces output beginning with $x$ if oracle calls behave according to $\eta$, but this isn't quite as obvious. The proof is to show by induction on $n$ that for all $M$ and $x$, $\mu'(M)(x)$ is at least the probability that $M$ produces output beginning with $x$ within the next $n$ timesteps. I'll omit the details here.
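Here is a sketch (mine, building on the toy state representation above, and assuming $\eta$ and $\mu$ are given as Python functions) of how the five equalities determine $\mu'(M)(x)$ from the behavior of $(\eta, \mu)$ on $M$'s successor states; for a halted state the equalities impose nothing, so the sketch just returns some arbitrary valid value. This one-step unfolding is also what the induction on $n$ mentioned above runs on.

```python
def mu_prime(eta, mu, M: "Machine", x: str) -> float:
    """Compute mu'(M)(x) from (eta, mu) via the reflection equalities.
    `eta(N, y, p)` and `mu(N, y)` are assumed given as functions; `M` is a
    value of the toy `Machine` type above.  For halted states, where the
    equalities impose no constraint, an arbitrary valid value is returned."""
    if isinstance(M, Step):                    # M -> M'
        return mu(M.next, x)
    if isinstance(M, Output):                  # M -> b M'
        if x == "":
            return 1.0                         # mu'(M)(empty) = 1 (oracle membership)
        if int(x[0]) == M.bit:
            return mu(M.next, x[1:])           # mu'(M)(b x') = mu(M')(x')
        return 0.0                             # mu'(M)(other-bit x') = 0
    if isinstance(M, Query):                   # M -> (N, y, p) ? M1 : M0
        q = eta(M.target, M.prefix, M.p)
        return q * mu(M.if_one, x) + (1.0 - q) * mu(M.if_zero, x)
    if isinstance(M, Flip):                    # M -> flip ? M1 : M0
        return 0.5 * mu(M.if_one, x) + 0.5 * mu(M.if_zero, x)
    return 2.0 ** (-len(x))                    # halted: fill in arbitrarily (uniform)
```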
As in my previous posts about versions of this system, I'll actually prove a slightly stronger result, which I expect will be helpful in showing a relation between reflective oracles and Nash equilibria. This makes use of the notion of a partial oracle, which is a pair $\tilde O = (\tilde\eta, \tilde\mu)$ of partial functions, $\tilde\eta$ from $\mathcal{M} \times \mathbb{B}^* \times [0,1]_{\mathbb{Q}}$ to $[0,1]$ and $\tilde\mu$ from $\mathcal{M} \times \mathbb{B}^*$ to $[0,1]$, which specify the values that $\eta$ and $\mu$ should take on certain inputs. An oracle $O = (\eta, \mu)$ extends a partial oracle $\tilde O$ if $\eta(M, x, p) = \tilde\eta(M, x, p)$ and $\mu(M)(x) = \tilde\mu(M, x)$ whenever the right-hand side of these equations is defined. We call a partial oracle $\tilde O$ reflective if for every $O$ extending $\tilde O$, there is an $O'$, also extending $\tilde O$, which reflects $O$.
Theorem.
i. There is a reflective oracle.
ii. For every reflective partial oracle $\tilde O$, there is a reflective oracle extending $\tilde O$.
Proof. We begin by showing that the empty partial oracle $\tilde O_\emptyset$ (i.e., the partial oracle satisfying $\operatorname{dom}(\tilde\eta) = \emptyset$ and $\operatorname{dom}(\tilde\mu) = \emptyset$) is reflective, since (i) then follows from (ii). Thus, let $O = (\eta, \mu)$ be any oracle (since every oracle extends $\tilde O_\emptyset$), and define $\eta'$ as follows: If $p < \mu(M)(x)$, let $\eta'(M, x, p) = 1$; else, let $\eta'(M, x, p) = 0$. Define $\mu'$ by the equations in the definition of "$O'$ reflects $O$" (choosing $\mu'(M)$ arbitrarily, say to be the uniform distribution, for states $M$ that have halted). Then, $O' = (\eta', \mu')$ clearly both reflects $O$ and (trivially) extends $\tilde O_\emptyset$. This finishes the proof that $\tilde O_\emptyset$ is a reflective partial oracle.
It remains to show (ii). For this, we use the infinite-dimensional generalization of Kakutani’s fixed point theorem:
Suppose that $S$ is a non-empty, convex and compact subset of a locally convex topological vector space. Suppose further that $\varphi : S \to 2^S$ is a set-valued function such that $\varphi(s)$ is non-empty, convex and compact for all $s \in S$, and such that the graph of $\varphi$, $\{(s, t) : t \in \varphi(s)\}$, is a closed set. Then $\varphi$ has a fixed point; that is, there is an $s \in S$ such that $s \in \varphi(s)$.
Let $\mathcal{O}$ be the set of all oracles. We can see this as a subset of $[0,1]^{\mathcal{M} \times \mathbb{B}^* \times [0,1]_{\mathbb{Q}}} \times [0,1]^{\mathcal{M} \times \mathbb{B}^*}$, which in turn is a subset of $\mathbb{R}^{\mathcal{M} \times \mathbb{B}^* \times [0,1]_{\mathbb{Q}}} \times \mathbb{R}^{\mathcal{M} \times \mathbb{B}^*}$, which is a locally convex topological vector space when endowed with the product topology (since this is true of any power of $\mathbb{R}$). $\mathcal{O}$ consists of all elements $(\eta, \mu)$ of this product such that $\mu$ satisfies the additivity properties we demand of elements of $\Delta\mathbb{B}^\omega$, namely $\mu(M)(\varepsilon) = 1$ and $\mu(M)(x) = \mu(M)(x0) + \mu(M)(x1)$. Clearly, $\mathcal{O}$ is nonempty, convex, and closed, and it is a subset of a product of copies of $[0,1]$, which, by Tychonoff's theorem, is compact. Moreover, both $\mathcal{M} \times \mathbb{B}^* \times [0,1]_{\mathbb{Q}}$ and $\mathcal{M} \times \mathbb{B}^*$ are countable, so it's sufficient to consider convergence of sequences in $\mathcal{O}$.
We apply the theorem with $S$ taken to be the set of oracles extending $\tilde O$ (a closed, convex, and hence compact subset of $\mathcal{O}$, since extending $\tilde O$ just fixes some coordinates), and we choose $\varphi(O)$ to consist of all oracles that reflect $O$ and extend $\tilde O$. By the assumption that $\tilde O$ is reflective, $\varphi(O)$ is non-empty for every $O \in S$. Moreover, it is easy to see that $\varphi(O)$ is both closed and convex; hence, if we can also show that $\varphi$ has closed graph, then by the fixed point theorem, there is an $O \in S$ such that $O \in \varphi(O)$, which is exactly what we want to show.
Thus, assume that we have sequences $O_n = (\eta_n, \mu_n)$ and $O'_n = (\eta'_n, \mu'_n)$ of oracles extending $\tilde O$ such that $O'_n \in \varphi(O_n)$ for every $n$, and suppose that these sequences converge to limits $O = (\eta, \mu)$ and $O' = (\eta', \mu')$; then we need to show that $O' \in \varphi(O)$, i.e., that $O'$ reflects $O$ (that $O'$ extends $\tilde O$ is immediate, since extending $\tilde O$ is a pointwise-closed condition). For the five equalities about $\mu'$ in the definition of "$O'$ reflects $O$", we simply take the limit of both sides of the equation, using the fact that convergence in the product topology is pointwise.
This leaves the two conditions about $\eta'$. Since the proof of the second condition is exactly symmetrical, consider without loss of generality the first of these conditions, i.e., "If $p < \mu(M)(x)$, then $\eta'(M, x, p) = 1$".
Thus, suppose that indeed $p < \mu(M)(x)$. Since $\mu_n(M)(x)$ converges to $\mu(M)(x)$ and convergence is pointwise, there must be an $n_0$ such that $p < \mu_n(M)(x)$ for all $n \geq n_0$. Since $O'_n$ reflects $O_n$ for all $n$, it follows that $\eta'_n(M, x, p) = 1$ for all $n \geq n_0$, whence by pointwise convergence $\eta'(M, x, p) = 1$, concluding the proof. □
Comments
comment by Cole Wyeth (Amyr) · 2024-08-26T22:31:40.248Z
This is a nitpick, but technically to apply the closed graph property instead of upper semicontinuity in Kakutani's fixed point theorem, I believe you need to know that your LCTVS is metrizable. It is sufficient to show it has a countable local base by Theorem 9 of Rudin's functional analysis, which is true because you've taken a countable power of R, but would not necessarily hold for any power of R (?).