Implementing CDT with optimal predictor systems

post by Vanessa Kosoy (vanessa-kosoy) · 2015-12-20T12:58:44.000Z · score: 1 (1 votes) · LW · GW · 2 comments
We consider transparent games between bounded CDT agents ("transparent" meaning each player has a model of the other players). The agents compute the expected utility of each possible action by executing an optimal predictor of a causal counterfactual, i.e. an optimal predictor for a function that evaluates the other players and computes the utility for the selected action. Since the agents simultaneously attempt to predict each other, the optimal predictors form an optimal predictor system for the reflective system comprising the causal counterfactuals of all agents. We show that for strict maximizers, the resulting outcome is a bounded analogue of an approximate Nash equilibrium, i.e. a strategy which is an optimal response within certain resource constraints up to an asymptotically small error. For "thermalizers" (agents that choose an action with probability proportional to the exponential of its expected utility), we get a similar result with expected utility replaced by "free utility" (expected utility plus appropriately scaled entropy). Thus, such optimal predictor systems behave like bounded counterparts of reflective oracles.
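The two agent types contrasted in the abstract can be illustrated concretely. Below is a minimal Python sketch (not from the post; the function names and the toy utility table are illustrative assumptions): a strict maximizer takes an argmax over predicted utilities, while a thermalizer samples an action with probability proportional to the exponential of its predicted utility.

```python
import math
import random

def strict_maximizer(actions, predicted_utility):
    """Pick an action maximizing predicted expected utility (ties broken arbitrarily)."""
    return max(actions, key=predicted_utility)

def thermalizer(actions, predicted_utility, beta=1.0):
    """Sample an action with probability proportional to exp(beta * predicted utility)."""
    weights = [math.exp(beta * predicted_utility(a)) for a in actions]
    total = sum(weights)  # normalization factor
    r = random.uniform(0.0, total)
    for a, w in zip(actions, weights):
        r -= w
        if r <= 0:
            return a
    return actions[-1]

# toy example: two actions with fixed predicted utilities
actions = ["cooperate", "defect"]
utility = {"cooperate": 0.6, "defect": 0.4}.get
print(strict_maximizer(actions, utility))  # -> cooperate
```

As beta grows, the thermalizer's distribution concentrates on the maximizing action, recovering the strict maximizer in the limit.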
The proofs for this section are given in Appendix A.
We redefine and to be somewhat smaller proto-error spaces which nevertheless yield the same existence theorems as before. This is thanks to Lemma A.1.
Given , denote the set of bounded functions s.t.
If is s.t. , is a proto-error space.
For any , is an ample proto-error space. When is non-decreasing, is stable.
is a stable ample proto-error space.
We denote . We allow the same abuse of notation for this symbol as for the usual big-O notation.
For any -bischeme we use to denote .
For reflective systems we write indices in as superscripts rather than subscripts as before.
Given , we denote and .
The proofs for this section are given in Appendix B.
We start by showing that choosing an action by maximizing over optimally predicted utility is an optimal strategy in some sense.
Fix a proto-error space . Consider a finite set, a word ensemble and . Suppose are -optimal predictors for . Assume that and don't depend on (this doesn't limit generality since we can replace by and by a polynomial ). Consider an -valued -bischeme. Define another -valued -bischeme where is computed by sampling from and choosing arbitrarily from s.t. is maximal. Then, there is s.t.
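The bischeme constructed in Theorem 1 samples each predictor's utility estimate and then takes an arbitrary element of the argmax. A minimal Python sketch of this selection step (the function name and the toy noisy predictors are illustrative assumptions, not from the post):

```python
import random

def argmax_over_sampled_predictions(actions, predictors, x, rand_bits):
    """Sample each predictor's utility estimate on input x, then return
    an action attaining the maximum (ties broken arbitrarily, here by
    order of appearance in `actions`)."""
    estimates = {a: predictors[a](x, rand_bits) for a in actions}
    best = max(estimates.values())
    return next(a for a in actions if estimates[a] == best)

# toy predictors: noisy estimates of a fixed utility per action
predictors = {
    "L": lambda x, r: 0.3 + 0.01 * r.random(),
    "R": lambda x, r: 0.7 + 0.01 * r.random(),
}
rng = random.Random(0)
print(argmax_over_sampled_predictions(["L", "R"], predictors, x=None, rand_bits=rng))  # -> R
```

The theorem's content is that when the predictors are optimal, this naive argmax is itself near-optimal among comparably bounded schemes, up to the stated error.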
Given suitable sets and , a -scheme of signature is s.t.
(i) The runtime of is bounded by for some polynomial .
(ii) for some polynomial .
(iii) is a probability measure on and there is s.t. .
A -scheme of signature is also called a -valued -scheme.
As usual, we sometimes use as implicit notation in which receives a value sampled from for the second argument and/or a value sampled from for the third argument.
Fix . Consider a finite set, a word ensemble and . Suppose are -optimal predictors for . Consider and an -valued -scheme. Define as in Theorem 1. Then
We now introduce the game theoretical setting.
A distributional game is a quadruple where is a word ensemble, is a finite set representing players, are finite sets representing the possible actions of each player and represent the utility functions.
For each , defines an ordinary game in normal form. We think of as a single game where the strategies are (some class of) functions and the payoffs are .
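Viewing the distributional game as a single game with function-valued strategies can be sketched as follows: each player's strategy maps words to actions, and payoffs are expectations over the word ensemble. A minimal Monte Carlo sketch (all names and the toy game are illustrative assumptions):

```python
import random

def expected_payoff(utility, strategies, sample_word, n_samples=1000, seed=0):
    """Monte Carlo estimate of each player's payoff when strategies are
    functions from words to actions and words are drawn from the ensemble."""
    rng = random.Random(seed)
    totals = {i: 0.0 for i in strategies}
    for _ in range(n_samples):
        x = sample_word(rng)
        profile = {i: s(x) for i, s in strategies.items()}
        for i in strategies:
            totals[i] += utility(i, x, profile)
    return {i: t / n_samples for i, t in totals.items()}

# toy game on words 0/1: player 0 wants to match player 1, player 1 wants to mismatch
sample_word = lambda rng: rng.randint(0, 1)
utility = lambda i, x, prof: (1.0 if prof[0] == prof[1] else 0.0) if i == 0 else \
                             (0.0 if prof[0] == prof[1] else 1.0)
strategies = {0: lambda x: x, 1: lambda x: 1 - x}
print(expected_payoff(utility, strategies, sample_word))  # -> {0: 0.0, 1: 1.0}
```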
Consider a distributional game and . We construct the reflective system .
is defined by
Denote to be the space of mixed actions of player , i.e. the space of probability measures on .
Finally, define by
For any -predictor we will use the notation to mean
For a family of -predictors, we will use the notations and to mean
Fix a distributional game, and an -optimal predictor system for . Consider , and an -valued scheme. Then
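The approximate-Nash property asserted here can be made concrete for ordinary mixed strategies: a profile is an epsilon-equilibrium when no player can gain more than epsilon by deviating to a pure action. A minimal best-response check for a two-player game (all names and the toy coordination game are illustrative assumptions, not the post's bounded version):

```python
def is_epsilon_best_response(payoff, actions, my_mixed, opp_mixed, epsilon):
    """Check that `my_mixed` is within epsilon of the best pure response,
    holding the opponent's mixed action fixed. `payoff[a][b]` is my payoff
    when I play a and the opponent plays b."""
    def eu(dist):
        return sum(dist[a] * opp_mixed[b] * payoff[a][b]
                   for a in actions for b in actions)
    current = eu(my_mixed)
    best_pure = max(eu({a: 1.0 if a == k else 0.0 for a in actions})
                    for k in actions)
    return current >= best_pure - epsilon

# toy coordination game: payoff 1 for matching, 0 otherwise
actions = ["H", "T"]
payoff = {"H": {"H": 1.0, "T": 0.0}, "T": {"H": 0.0, "T": 1.0}}
uniform = {"H": 0.5, "T": 0.5}
print(is_epsilon_best_response(payoff, actions, uniform, uniform, 1e-9))  # -> True
```

Corollary 2's bounded analogue replaces "best pure response" with "best response among resource-bounded schemes" and epsilon with an asymptotically small error.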
We now move from studying strict maximizers to studying thermalizers.
Fix a proto-error space . Consider a finite set, a word ensemble and . Suppose are -optimal predictors for and is some function. Assume that and don't depend on . Consider an -valued -bischeme. Suppose is another -valued -bischeme satisfying
Here is sampled from , is the normalization factor and . Then, there is s.t.
Here stands for the (base ) Shannon entropy of a probability measure on .
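The free-utility bound for thermalizers can be illustrated numerically: among all distributions over actions, the softmax distribution maximizes expected utility plus inverse-temperature-scaled entropy. A minimal sketch (function names are illustrative assumptions; entropy is taken in nats here, which is the base for which the softmax is exactly the maximizer):

```python
import math

def softmax(utilities, beta):
    """Thermalizer distribution: probabilities proportional to exp(beta * u)."""
    weights = {a: math.exp(beta * u) for a, u in utilities.items()}
    z = sum(weights.values())  # normalization factor
    return {a: w / z for a, w in weights.items()}

def free_utility(dist, utilities, beta):
    """Expected utility plus (1/beta) times the Shannon entropy of dist."""
    eu = sum(dist[a] * utilities[a] for a in dist)
    h = -sum(p * math.log(p) for p in dist.values() if p > 0)
    return eu + h / beta

utilities = {"a": 1.0, "b": 0.0}
beta = 2.0
therm = softmax(utilities, beta)
uniform = {"a": 0.5, "b": 0.5}
print(free_utility(therm, utilities, beta) >= free_utility(uniform, utilities, beta))  # -> True
```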
Fix . Consider a finite set, a word ensemble and . Suppose are -optimal predictors for and a bounded function. Consider and an -valued -scheme. Assume is as in Theorem 2. Then
Here stands for averaging the probability measure on with respect to parameter using probability distribution .
Consider a distributional game, and . We construct the reflective system .
Here is the normalization factor.
Finally, define by
We define the notations , and analogously to before.
Fix a distributional game, , bounded and an -optimal predictor system for . Consider , and an -valued scheme. Then
Finally, we show that assuming doesn't depend on and doesn't fall to zero too fast, deterministic advice is sufficient to construct an optimal predictor system for .
Consider a reflective system and . is said to be -Hoelder when there are , , , and s.t.
Consider a finite set , a reflective system and . If is -Hoelder then it has an -optimal predictor system.
Consider a distributional game, and . Assume for some . Then is -Hoelder.
Suppose is a non-decreasing function s.t. . Define by
Proof of Proposition A.1
Consider . We have
Proof of Proposition 1
Mostly obvious modulo Proposition A.1. To see that is stable for non-decreasing , consider a non-constant polynomial and . Define . To get the desired condition for and with , consider any s.t. and for sufficiently large we have . Such exists since for sufficiently large . We have
Consider a polynomial . There is a function s.t.
(ii) For any function we have
Proof of Proposition A.2
Given functions s.t. for , the proposition for implies the proposition for by setting
Therefore, it is enough to prove the proposition for functions of the form for .
Consider any . We have
Since takes values in
The last two equations imply that
Raising to a power is equivalent to adding a constant to , therefore
Since we can choose satisfying condition (i) so that
It follows that
Consider a distributional estimation problem, , -predictors. Suppose a polynomial, and are s.t.
The proof of Lemma A.1 is analogous to before and we omit it.
The following is a slightly stronger version of one direction of the orthogonality lemma.
Consider a distributional estimation problem and an -optimal predictor for . Then there are and an -moderate function s.t. for any , a probability measure on and that runs in time on any valid input
The proof is analogous to the original and we omit it.
Fix a proto-error space . Consider a finite set, a word ensemble and . Suppose are -optimal predictors for . Assume that and don't depend on . Consider an -valued -bischeme. Assume factors as . Then
Proof of Lemma B.2
Applying Lemma B.1 we get the desired result.
Proof of Theorem 1
Lemma B.2 implies
Combining the two