# A utility-maximizing variant of AIXI

post by AlexMennen · 2012-12-17T03:48:05.173Z · score: 15 (18 votes) · LW · GW · Legacy · 20 comments

## Contents

Background
Problem
Solution
Extension to infinite lifetimes
Proof of criterion for supremum-chasing working


Response to: Universal agents and utility functions

Related approaches: Hibbard (2012), Hay (2005)

## Background

Here is the function implemented by finite-lifetime AI$\xi$:

${\displaystyle \dot{y}_{k}:=\arg\max_{y_{k}\in Y}\sum_{x_{k}\in X}\max_{y_{k+1}\in Y}\sum_{x_{k+1}\in X}...\max_{y_{m}\in Y}\sum_{x_{m}\in X}\left(r\left(x_{k}\right)+...+r\left(x_{m}\right)\right)\cdot\xi\left(\dot{y}\dot{x}_{<k}y\underline{x}_{k:m}\right)}$,

where $m$ is the number of steps in the lifetime of the agent, $k$ is the current step being computed, $X$ is the set of possible observations, $Y$ is the set of possible actions, $r$ is a function that extracts a reward value from an observation, a dot over a variable represents that its value is known to be the true value of the action or observation it represents, underlines represent that the variable is an input to a probability distribution, and $\xi$ is a function that returns the probability of a sequence of observations, given a certain known history and sequence of actions, and starting from the Solomonoff prior. More formally,

${\displaystyle \xi\left(\dot{y}\dot{x}_{<k}y\underline{x}_{k:m}\right):=\frac{\sum_{q\in Q:q\left(y_{\leq m}\right)=x_{\leq m}}2^{-\ell\left(q\right)}}{\sum_{q\in Q:q\left(\dot{y}_{<k}\right)=\dot{x}_{<k}}2^{-\ell\left(q\right)}}}$,

where $Q$ is the set of all programs, $\ell$ is a function that returns the length of a program in bits, and a program applied to a sequence of actions returns the resulting sequence of observations. Notice that the denominator is a constant, depending only on the already known $\dot{y}\dot{x}_{<k}$, and multiplying by a positive constant does not change the argmax, so we can pretend that the denominator doesn't exist. If $q$ is a valid program, then any longer program with $q$ as a prefix is not a valid program, so ${\displaystyle \sum_{q\in Q}2^{-\ell\left(q\right)}\leq1}$.
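To see how the weights $2^{-\ell\left(q\right)}$ behave, here is a minimal Python sketch over a hypothetical three-program toy world; the programs, their prefix-free codes, and the histories are illustrative assumptions, not part of the formalism:

```python
# Toy stand-ins for Q: each "program" is identified by a prefix-free binary
# code (so the weights 2^-len(code) sum to at most 1, as in the text) and
# maps an action sequence to an observation sequence of the same length.
programs = {
    "0":  lambda acts: [0] * len(acts),  # constant-0 world, weight 1/2
    "10": lambda acts: [1] * len(acts),  # constant-1 world, weight 1/4
    "11": lambda acts: list(acts),       # echo-the-action world, weight 1/4
}

def weight(code):
    return 2.0 ** (-len(code))

def xi(past_acts, past_obs, future_acts, future_obs):
    """xi as in the text: total weight of programs producing the full
    observation sequence, renormalized by the weight of programs that
    are consistent with the known history."""
    acts, obs = past_acts + future_acts, past_obs + future_obs
    num = sum(weight(c) for c, q in programs.items() if q(acts) == obs)
    den = sum(weight(c) for c, q in programs.items()
              if q(past_acts) == past_obs)
    return num / den

# After action 1 produced observation 1, the constant-0 world is ruled out,
# and the two survivors split the remaining probability evenly:
print(xi([1], [1], [0], [1]))  # constant-1 predicts the next obs is 1 -> 0.5
print(xi([1], [1], [0], [0]))  # echo predicts the next obs is 0 -> 0.5
```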

## Problem

A problem with this is that it can only optimize over the input it receives, not over aspects of the external world that it cannot observe. Given the chance, AI$\xi$ would hack its input channel so that it would only observe good things, instead of trying to make good things happen (in other words, it would wirehead itself). Anja specified a variant of AI$\xi$ in which she replaced the sum of rewards with a single utility value and made the domain of the utility function be the entire sequence of actions and observations instead of a single observation, like so:

${\displaystyle \dot{y}_{k}:=\arg\max_{y_{k}\in Y}\sum_{x_{k}\in X}\max_{y_{k+1}\in Y}\sum_{x_{k+1}\in X}...\max_{y_{m}\in Y}\sum_{x_{m}\in X}U\left(\dot{y}\dot{x}_{<k}yx_{k:m}\right)\cdot\xi\left(\dot{y}\dot{x}_{<k}y\underline{x}_{k:m}\right)}$.

This doesn't really solve the problem, because the utility function still depends only on what the agent can see, rather than on what is actually going on outside the agent. The situation is a little better because the utility function also takes the agent's actions into account, so it could penalize actions that look like attempts to wirehead. But if there were any flaw in the instructions not to wirehead, the agent would exploit it, so the incentive not to wirehead would have to be specified perfectly, and this formulation is not very enlightening about how to do that.

[Edit: Hibbard (2012) also presents a solution to this problem. I haven't read all of it yet, but it appears to be fairly different from what I suggest in the next section.]

## Solution

Here's what I suggest instead: everything that happens is determined by the program that the world is running on and the agent's actions, so the domain of the utility function should be $Q\times Y^{m}$. The apparent problem with that is that the formula for AI$\xi$ does not contain any mention of elements of $Q$. If we just take the original formula and replace $r\left(x_{k}\right)+...+r\left(x_{m}\right)$ with $U\left(q,\dot{y}_{<k}y_{k:m}\right)$, it wouldn't make any sense. However, if we expand out $\xi$ in the original formula (excluding the unnecessary denominator), we can move the sum of rewards inside the sum over programs, like this:

${\displaystyle \dot{y}_{k}:=\arg\max_{y_{k}\in Y}\sum_{x_{k}\in X}\max_{y_{k+1}\in Y}\sum_{x_{k+1}\in X}...\max_{y_{m}\in Y}\sum_{x_{m}\in X}\sum_{q\in Q:q\left(y_{\leq m}\right)=x_{\leq m}}\left(r\left(x_{k}\right)+...+r\left(x_{m}\right)\right)2^{-\ell\left(q\right)}}$.

Now it is easy to replace the sum of rewards with the desired utility function.

${\displaystyle \dot{y}_{k}:=\arg\max_{y_{k}\in Y}\sum_{x_{k}\in X}\max_{y_{k+1}\in Y}\sum_{x_{k+1}\in X}...\max_{y_{m}\in Y}\sum_{x_{m}\in X}\sum_{q\in Q:q\left(y_{\leq m}\right)=x_{\leq m}}U\left(q,\dot{y}_{<k}y_{k:m}\right)2^{-\ell\left(q\right)}}$.

With this formulation, there is no danger of the agent wireheading, and all $U$ has to do is compute everything that happens when the agent performs a given sequence of actions in a given program, and decide how desirable it is. If the range of $U$ is unbounded, then this might not converge. Let's assume throughout this post that the range of $U$ is bounded.
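As a sanity check, this formula can be evaluated by brute force when $Q$ is replaced by a small hypothetical set of environments. Everything below (the three toy programs, their codes, and the illustrative $U$, which cares only about echo-worlds and rewards action 1 in them) is an assumption of the sketch; the point is only the shape of the nested argmax/sum/sum-over-programs computation:

```python
ACTIONS, OBS = [0, 1], [0, 1]

# Hypothetical environments, identified by prefix-free codes of weight 2^-len.
programs = {
    "0":  lambda acts: [0] * len(acts),       # constant-0
    "10": lambda acts: list(acts),            # echoes the action
    "11": lambda acts: [1 - a for a in acts], # inverts the action
}

def U(code, acts):
    # Illustrative utility over Q x Y^m: it cares which program the world
    # runs (not what the agent observes), preferring action 1 in echo-worlds.
    return float(sum(acts)) if code == "10" else 0.0

def value(acts, obs, m):
    """The formula's tail: alternate max over actions and sum over
    observations until the horizon m, then sum U * 2^-l(q) over the
    programs consistent with the whole interaction history."""
    if len(acts) == m:
        return sum(2.0 ** (-len(c)) * U(c, acts)
                   for c, q in programs.items() if q(acts) == obs)
    return max(sum(value(acts + [y], obs + [x], m) for x in OBS)
               for y in ACTIONS)

best = max(ACTIONS, key=lambda y: sum(value([y], [x], 2) for x in OBS))
print(best)  # -> 1: only the echo-world pays, and there action 1 pays more
```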

[Edit: Hay (2005) presents a similar formulation to this.]

## Extension to infinite lifetimes

The previous discussion assumed that the agent would only have the opportunity to perform a finite number of actions. The situation gets a little tricky when the agent is allowed to perform an unbounded number of actions. Hutter uses a finite look-ahead approach for AI$\xi$, where on each step $k$, it pretends that it will only be performing $m_{k}$ actions, where $\forall k\, m_{k}\gg k$.

${\displaystyle \dot{y}_{k}:=\arg\max_{y_{k}\in Y}\sum_{x_{k}\in X}\max_{y_{k+1}\in Y}\sum_{x_{k+1}\in X}...\max_{y_{m_{k}}\in Y}\sum_{x_{m_{k}}\in X}\left(r\left(x_{k}\right)+...+r\left(x_{m_{k}}\right)\right)\cdot\xi\left(\dot{y}\dot{x}_{<k}y\underline{x}_{k:m_{k}}\right)}$.

If we make the same modification to the utility-based variant, we get

${\displaystyle \dot{y}_{k}:=\arg\max_{y_{k}\in Y}\sum_{x_{k}\in X}\max_{y_{k+1}\in Y}\sum_{x_{k+1}\in X}...\max_{y_{m_{k}}\in Y}\sum_{x_{m_{k}}\in X}\sum_{q\in Q:q\left(y_{\leq m_{k}}\right)=x_{\leq m_{k}}}U\left(q,\dot{y}_{<k}y_{k:m_{k}}\right)2^{-\ell\left(q\right)}}$.

This is unsatisfactory because the domain of $U$ was supposed to consist of all the information necessary to determine everything that happens, but here, it is missing all the actions after step $m_{k}$. One obvious thing to try is to set $m_{k}:=\infty$. This will be easier to do using a compacted expression for AI$\xi$:

${\displaystyle \dot{y}_{k}:=\arg\max_{y_{k}\in Y}\max_{p\in P:p\left(\dot{x}_{<k}\right)=\dot{y}_{<k}y_{k}}\sum_{q\in Q:q\left(\dot{y}_{<k}\right)=\dot{x}_{<k}}\left(r\left(x_{k}^{pq}\right)+...+r\left(x_{m_{k}}^{pq}\right)\right)2^{-\ell\left(q\right)}}$,

where $P$ is the set of policies that map sequences of observations to sequences of actions, and $x_{i}^{pq}$ is shorthand for the last observation in the sequence returned by $q\left(p\left(\dot{x}_{<k}x_{k:i-1}^{pq}\right)\right)$. If we take this compacted formulation, modify it to accommodate the new utility function, set $m_{k}:=\infty$, and replace the maximum with a supremum (since there is an infinite number of possible policies), we get

${\displaystyle \dot{y}_{k}:=\arg\max_{y_{k}\in Y}\sup_{p\in P:p\left(\dot{x}_{<k}\right)=\dot{y}_{<k}y_{k}}\sum_{q\in Q:q\left(\dot{y}_{<k}\right)=\dot{x}_{<k}}U\left(q,y_{1:\infty}^{pq}\right)2^{-\ell\left(q\right)}}$,

where $y_{i}^{pq}$ is shorthand for the last action in the sequence returned by $p\left(q\left(\dot{y}_{<k}y_{k:i-1}^{pq}\right)\right)$.

But there is a problem with this, which I will illustrate with a toy example. Suppose $Y:=\left\{ a,b\right\}$, and $U\left(q,y_{1:\infty}\right)=0$ when $\forall k\in\mathbb{N}\, y_{k}=a$, and for any $n\in\mathbb{N}$, $U\left(q,y_{1:\infty}\right)=1-\frac{1}{n}$ when $y_{n}=b$ and $\forall k<n\, y_{k}=a$ ($U$ does not depend on the program $q$ in this example). An agent following the above formula would output $a$ on every step, and end up with a utility of $0$, when it could have gotten arbitrarily close to $1$ by eventually outputting $b$.

To avoid problems like that, we could assume the reasonable-seeming condition that if $y_{1:\infty}$ is an action sequence and $\left\{ y_{1:\infty}^{n}\right\} _{n=1}^{\infty}$ is a sequence of action sequences that converges to $y_{1:\infty}$ (by which I mean $\forall k\in\mathbb{N}\,\exists N\in\mathbb{N}\,\forall n>N\, y_{k}^{n}=y_{k}$), then ${\displaystyle \lim_{n\rightarrow\infty}U\left(q,y_{1:\infty}^{n}\right)=U\left(q,y_{1:\infty}\right)}$.
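As a quick numerical check (finite prefixes stand in for infinite sequences here, which is enough to evaluate this particular $U$), the toy example above violates exactly this condition: the sequences with $b$ at position $n$ converge pointwise to the all-$a$ sequence, but their utilities converge to $1$ rather than to the utility $0$ of the limit:

```python
# Sequences are represented by their first `depth` symbols, enough to
# evaluate this particular toy U.

def y_n(n, depth):
    """All a's except a single b at (1-based) position n."""
    return ['b' if k == n else 'a' for k in range(1, depth + 1)]

def U(seq):
    # 1 - 1/n if the first b is at position n, else 0 (the toy U above)
    for i, s in enumerate(seq, start=1):
        if s == 'b':
            return 1.0 - 1.0 / i
    return 0.0

limit = ['a'] * 50
# pointwise convergence: at every position k, y_n agrees with the limit
# once n is past k
assert all(y_n(n, 50)[k] == limit[k] for k in range(10) for n in range(11, 40))
print([round(U(y_n(n, 50)), 3) for n in (2, 10, 40)])  # climbing toward 1
print(U(limit))                                        # 0.0 at the limit
```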

Under that assumption, the supremum is in fact a maximum, and the formula gives you an action sequence that will reach that maximum (proof below).

If you don't like the condition I imposed on $U$, you might not be satisfied by this. But without it, there is not necessarily a best policy. One thing you can do is, on step 1, pick some extremely small $\varepsilon>0$, pick any element from

${\displaystyle \left\{ p^{*}\in P:\sum_{q\in Q}U\left(q,y_{1:\infty}^{p^{*}q}\right)2^{-\ell\left(q\right)}>\left(\sup_{p\in P}\sum_{q\in Q}U\left(q,y_{1:\infty}^{pq}\right)2^{-\ell\left(q\right)}\right)-\varepsilon\right\} }$, and then follow that policy for the rest of eternity, which will guarantee that you do not miss out on more than $\varepsilon$ of expected utility.
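A toy instance of this $\varepsilon$-fallback, with the policy family and its values stood in for by the running example (policy $n$ has expected utility $1-\frac{1}{n}$, so the supremum is $1$ and no single policy attains it; the family and the chosen $\varepsilon$ are illustrative assumptions):

```python
eps = 1e-3          # the "extremely small" epsilon chosen on step 1

def value(n):       # stand-in for sum_q U(q, y^{pq}) 2^{-l(q)} for policy n
    return 1.0 - 1.0 / n

sup_value = 1.0     # known analytically for this family

# pick any policy whose value is within eps of the supremum, then commit
chosen = next(n for n in range(1, 10**6) if value(n) > sup_value - eps)
print(chosen)                           # 1001
print(sup_value - value(chosen) < eps)  # True: we miss out on less than eps
```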

## Proof of criterion for supremum-chasing working

definition: If $y_{1:\infty}$ is an action sequence and $\left\{ y_{1:\infty}^{n}\right\} _{n=1}^{\infty}$ is an infinite sequence of action sequences, and $\forall k\in\mathbb{N}\,\exists N\in\mathbb{N}\,\forall n>N\, y_{k}^{n}=y_{k}$, then we say $\left\{ y_{1:\infty}^{n}\right\} _{n=1}^{\infty}$ converges to $y_{1:\infty}$. If $p$ is a policy and $\left\{ p_{n}\right\} _{n=1}^{\infty}$ is a sequence of policies, and $\forall k\in\mathbb{N}\,\forall x_{<k}\in X^{k-1}\,\exists N\in\mathbb{N}\,\forall n>N\, p_{n}\left(x_{<k}\right)=p\left(x_{<k}\right)$, then we say $\left\{ p_{n}\right\} _{n=1}^{\infty}$ converges to $p$.

assumption (for lemma 2 and theorem): If $\left\{ y_{1:\infty}^{n}\right\} _{n=1}^{\infty}$ converges to $y_{1:\infty}$, then ${\displaystyle \lim_{n\rightarrow\infty}U\left(q,y_{1:\infty}^{n}\right)=U\left(q,y_{1:\infty}\right)}$.

lemma 1: The agent described by

${\displaystyle \dot{y}_{k}:=\arg\max_{y_{k}\in Y}\sup_{p\in P:p\left(\dot{x}_{<k}\right)=\dot{y}_{<k}y_{k}}\sum_{q\in Q:q\left(\dot{y}_{<k}\right)=\dot{x}_{<k}}U\left(q,y_{1:\infty}^{pq}\right)2^{-\ell\left(q\right)}}$ follows a policy that is the limit of a sequence of policies $\left\{ p_{n}\right\} _{n=1}^{\infty}$ such that

${\displaystyle \lim_{n\rightarrow\infty}\sum_{q\in Q}U\left(q,y_{1:\infty}^{p_{n}q}\right)2^{-\ell\left(q\right)}=\sup_{p\in P}\sum_{q\in Q}U\left(q,y_{1:\infty}^{pq}\right)2^{-\ell\left(q\right)}}$.

proof of lemma 1: Any policy can be completely described by the last action it outputs for every finite observation sequence. Observations are returned by a program, so the set of possible finite observation sequences is countable. It is possible to fix the last action returned on any particular finite observation sequence to be the argmax, and still get arbitrarily close to the supremum with suitable choices for the last action returned on the other finite observation sequences. By induction, it is possible to get arbitrarily close to the supremum while fixing the last action returned to be the argmax for any finite set of finite observation sequences. Thus, there exists a sequence of policies approaching the policy that the agent implements whose expected utilities approach the supremum.

lemma 2: If $p$ is a policy and $\left\{ p_{n}\right\} _{n=1}^{\infty}$ is a sequence of policies converging to $p$, then

${\displaystyle \sum_{q\in Q}U\left(q,y_{1:\infty}^{pq}\right)2^{-\ell\left(q\right)}=\lim_{n\rightarrow\infty}\sum_{q\in Q}U\left(q,y_{1:\infty}^{p_{n}q}\right)2^{-\ell\left(q\right)}}$.

proof of lemma 2: Let $\varepsilon>0$. On any given sequence of inputs $x_{1:\infty}\in X^{\infty}$, $\left\{ p_{n}\left(x_{1:\infty}\right)\right\} _{n=1}^{\infty}$ converges to $p\left(x_{1:\infty}\right)$, so, by assumption, $\forall q\in Q\,\exists N\in\mathbb{N}\,\forall n\geq N\,\left|U\left(q,y_{1:\infty}^{pq}\right)-U\left(q,y_{1:\infty}^{p_{n}q}\right)\right|<\frac{\varepsilon}{2}$.

For each $N\in\mathbb{N}$, let $Q_{N}:=\left\{ q\in Q:\forall n\geq N\,\left|U\left(q,y_{1:\infty}^{pq}\right)-U\left(q,y_{1:\infty}^{p_{n}q}\right)\right|<\frac{\varepsilon}{2}\right\}$. The previous statement implies that ${\displaystyle \bigcup_{N\in\mathbb{N}}Q_{N}=Q}$, and each element of $\left\{ Q_{N}\right\} _{N\in\mathbb{N}}$ is a subset of the next, so

${\displaystyle \exists N\in\mathbb{N}\,\sum_{q\in Q\setminus Q_{N}}2^{-\ell\left(q\right)}<\frac{\varepsilon}{2\left(\sup U-\inf U\right)}}$. The range of $U$ is bounded, so $\sup U$ and $\inf U$ are defined. This also implies that the difference in expected utility, given any information, of any two policies, is bounded. More formally:${\displaystyle \forall Q'\subset Q\,\forall p',p''\in P\,\left|\left(\left(\sum_{q\in Q'}U\left(q,y_{1:\infty}^{p'q}\right)2^{-\ell\left(q\right)}\right)\diagup\left(\sum_{q\in Q'}2^{-\ell\left(q\right)}\right)\right)-\left(\left(\sum_{q\in Q'}U\left(q,y_{1:\infty}^{p''q}\right)2^{-\ell\left(q\right)}\right)\diagup\left(\sum_{q\in Q'}2^{-\ell\left(q\right)}\right)\right)\right|\leq\sup U-\inf U}$,

so in particular,

${\displaystyle \left|\left(\sum_{q\in Q\setminus Q_{N}}U\left(q,y_{1:\infty}^{pq}\right)2^{-\ell\left(q\right)}\right)-\left(\sum_{q\in Q\setminus Q_{N}}U\left(q,y_{1:\infty}^{p_{N}q}\right)2^{-\ell\left(q\right)}\right)\right|\leq\left(\sup U-\inf U\right)\sum_{q\in Q\setminus Q_{N}}2^{-\ell\left(q\right)}<\frac{\varepsilon}{2}}$.

${\displaystyle \left|\left(\sum_{q\in Q}U\left(q,y_{1:\infty}^{pq}\right)2^{-\ell\left(q\right)}\right)-\left(\sum_{q\in Q}U\left(q,y_{1:\infty}^{p_{N}q}\right)2^{-\ell\left(q\right)}\right)\right|\leq\left|\left(\sum_{q\in Q_{N}}U\left(q,y_{1:\infty}^{pq}\right)2^{-\ell\left(q\right)}\right)-\left(\sum_{q\in Q_{N}}U\left(q,y_{1:\infty}^{p_{N}q}\right)2^{-\ell\left(q\right)}\right)\right|+\left|\left(\sum_{q\in Q\setminus Q_{N}}U\left(q,y_{1:\infty}^{pq}\right)2^{-\ell\left(q\right)}\right)-\left(\sum_{q\in Q\setminus Q_{N}}U\left(q,y_{1:\infty}^{p_{N}q}\right)2^{-\ell\left(q\right)}\right)\right|<\frac{\varepsilon}{2}+\frac{\varepsilon}{2}=\varepsilon}$.

theorem:

${\displaystyle \sum_{\dot{q}\in Q}U\left(\dot{q},\dot{y}_{1:\infty}\right)2^{-\ell\left(\dot{q}\right)}=\sup_{p\in P}\sum_{q\in Q}U\left(q,y_{1:\infty}^{pq}\right)2^{-\ell\left(q\right)}}$,

where ${\displaystyle \dot{y}_{k}:=\arg\max_{y_{k}\in Y}\sup_{p\in P:p\left(\dot{x}_{<k}\right)=\dot{y}_{<k}y_{k}}\sum_{q\in Q:q\left(\dot{y}_{<k}\right)=\dot{x}_{<k}}U\left(q,y_{1:\infty}^{pq}\right)2^{-\ell\left(q\right)}}$.

proof of theorem: Let's call the policy implemented by the agent $p^{*}$.

By lemma 1, there is a sequence of policies $\left\{ p_{n}\right\} _{n=1}^{\infty}$

converging to $p^{*}$ such that

${\displaystyle \lim_{n\rightarrow\infty}\sum_{q\in Q}U\left(q,y_{1:\infty}^{p_{n}q}\right)2^{-\ell\left(q\right)}=\sup_{p\in P}\sum_{q\in Q}U\left(q,y_{1:\infty}^{pq}\right)2^{-\ell\left(q\right)}}$.

By lemma 2,

${\displaystyle \sum_{q\in Q}U\left(q,y_{1:\infty}^{p^{*}q}\right)2^{-\ell\left(q\right)}=\sup_{p\in P}\sum_{q\in Q}U\left(q,y_{1:\infty}^{pq}\right)2^{-\ell\left(q\right)}}$.

## Comments

comment by Anja · 2012-12-19T17:41:48.948Z · score: 5 (5 votes) · LW(p) · GW(p)

I like how you specify utility directly over programs, it describes very neatly how someone who sat down and wrote a utility function

$U\left(\dot{y}\dot{x}_{1:m_{k}}\right)$

would do it: First determine how the observation could have been computed by the environment and then evaluate that situation. This is a special case of the framework I wrote down in the cited article; you can always set

$U\left(\dot{y}\dot{x}_{1:m_{k}}\right)=\sum_{q:q\left(y_{1:m_{k}}\right)=x_{1:m_{k}}}U\left(q,y_{1:m_{k}}\right)$

This solves wireheading only if we can specify which environments contain wireheaded (non-dualistic) agents, delusion boxes, etc.

comment by AlexMennen · 2012-12-19T20:13:03.691Z · score: 1 (1 votes) · LW(p) · GW(p)

True, the U(program, action sequence) framework can be implemented within the U(action/observation sequence) framework, although you forgot to multiply by 2^-l(q) when describing how. I also don't really like the finite look-ahead (until m_k) method, since it is dynamically inconsistent.

> This solves wireheading only if we can specify which environments contain wireheaded (non-dualistic) agents, delusion boxes, etc.

Not sure what you mean by that.

comment by Anja · 2012-12-20T18:29:57.765Z · score: 2 (2 votes) · LW(p) · GW(p)

> you forgot to multiply by 2^-l(q)

I think then you would count that twice, wouldn't you? Because my original formula already contains the Solomonoff probability...

comment by AlexMennen · 2012-12-20T21:18:30.397Z · score: 1 (1 votes) · LW(p) · GW(p)

Oh right. But you still want the probability weighting to be inside the sum, so you would actually need $U\left(\dot{y}\dot{x}_{1:m_{k}}\right)=\frac{1}{\xi\left(\dot{y}\dot{x}_{<k}y\underline{x}_{k:m_{k}}\right)}\sum_{q:q\left(y_{1:m_{k}}\right)=x_{1:m_{k}}}U\left(q,y_{1:m_{k}}\right)2^{-\ell\left(q\right)}$

comment by Anja · 2012-12-20T21:36:35.070Z · score: 2 (2 votes) · LW(p) · GW(p)

True. :)

comment by Anja · 2012-12-20T18:25:49.904Z · score: 0 (0 votes) · LW(p) · GW(p)

Let's stick with delusion boxes for now, because assuming that we can read off from the environment whether the agent has wireheaded breaks dualism. So even if we specify utility directly over environments, we still need to master the task of specifying which action/environment combinations contain delusion boxes to evaluate them correctly. It is still the same problem, just phrased differently.

comment by AlexMennen · 2012-12-20T21:21:39.893Z · score: 0 (0 votes) · LW(p) · GW(p)

If I understand you correctly, that sounds like a fairly straightforward problem for AIXI to solve. Some programs q_1 will mimic some other program q_2's communication with the agent while doing something else in the background, but AIXI considers the possibilities of both q_1 and q_2.

comment by lukeprog · 2012-12-17T12:30:08.079Z · score: 5 (5 votes) · LW(p) · GW(p)

Thanks for putting in all this work!

I recommend you post this to the MAGIC list and ask for feedback. You'll also see a discussion of Anja's post there.

comment by V_V · 2014-02-19T14:48:19.167Z · score: 2 (2 votes) · LW(p) · GW(p)

Formulas aren't being rendered correctly, they appear as latex code.

comment by AlexMennen · 2014-02-19T22:06:30.307Z · score: 0 (0 votes) · LW(p) · GW(p)

Looks like it's back up now.

comment by AlexMennen · 2014-02-19T16:46:08.762Z · score: 0 (0 votes) · LW(p) · GW(p)

That appears to be because the formulas are images hosted by codecogs.com, which is down.

comment by Alex_Altair · 2013-01-03T21:59:00.644Z · score: 0 (0 votes) · LW(p) · GW(p)

This is great!

I really like your use of $U\left(q,y_{1:\infty}\right)$. This seems to be an important step along the way to eliminating the horizon problem. I recently read in Orseau and Ring's "Space-Time Embedded Intelligence" that in another paper, "Provably Bounded-Optimal Agents" by Russell and Subramanian, they define $V\left(\pi,q\right)=u\left(h\left(\pi,q\right)\right)$, where $h\left(\pi,q\right)$ generates the interaction history $ao_{1:\infty}$. (I have yet to read this paper.)

Your counterexample of supremum chasing is really great; it breaks my model of how utility maximization is supposed to go. I'm honestly not sure whether one should chase the path of U = 0 or not. This is clouded by the fact that the probabilistic nature of things will probably push you off that path eventually.

The dilemma reminds me a lot of exploring versus exploiting. Sometimes it seems to me that the rational thing for a utility maximizer to do, almost independent of the utility function, would be to just maximize the resources it controlled, until it found some "end" or limit, and then spend all its resources creating whatever it wanted in the first place. In the framework above we've specified that there is no "end" time, and AIXI is dualist, so there are no worries of it getting destroyed.

There's something else that is strange to me. If we are considering infinite interaction histories, then we're looking at the entire binary tree at once. But this tree has uncountably many paths! Almost all of the (infinite) paths are incomputable sequences. This means that any computable AI couldn't even consider traversing them. And it also seems to have interesting things to say about the utility function. Does it only need to be defined over computable sequences? What if we have utility over incomputable sequences? These could be defined by second-order logic statements, but remain incomputable. It gives me lots of questions.

comment by AlexMennen · 2013-01-03T23:10:28.029Z · score: 0 (0 votes) · LW(p) · GW(p)

> I'm honestly not sure whether one should chase the path of U = 0 or not. This is clouded by the fact that the probabilistic nature of things will probably push you off that path eventually.

Making the assumption that there is a small probability that you will deviate from your current plan on each future move, and that these probabilities add up to a near guarantee that you will eventually, has a more complicated effect on your planning than just justifying chasing the supremum.

For instance, consider this modification to the toy example I gave earlier. Y:={a,b,c}, and if the first b comes before the first c, then the resulting utility is 1 - 1/n, where n is the index of the first b (all previous elements being a), as before. But we'll change it so that the utility of outputting an infinite stream of a is 1. If there is a c in your action sequence and it comes before the first b, then the utility you get is -1000. In this situation, supremum-chasing works just fine if you completely trust your future self: you output a every time, and get a utility of 1, the best you could possibly do. But if you think that there is a small risk that you could end up outputting c at some point, then eventually it will be worthwhile to output b, since the gain you could get from continuing to output a gets arbitrarily small compared to the loss from accidentally outputting c.
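In numbers, a quick sketch of this tradeoff (the per-step slip probability eps and the search cap are assumptions of the illustration, not part of the example):

```python
# The plan "output a for n-1 steps, then b" succeeds with probability
# (1 - eps)^(n-1), earning 1 - 1/n; otherwise a c slipped out before the
# first b, earning -1000. "Output a forever" is the n -> infinity limit,
# where the slip probability accumulates to 1 and the value tends to -1000.

eps = 1e-6  # hypothetical probability of accidentally outputting c each step

def expected_utility(n):
    p_ok = (1 - eps) ** (n - 1)  # no slip during the n-1 leading a's
    return p_ok * (1 - 1 / n) + (1 - p_ok) * (-1000.0)

best_n = max(range(1, 200), key=expected_utility)
print(best_n)  # a finite switching time wins; here the optimum is n = 32
```

So the best finite switching time is much later than step 1 but still finite, roughly where $\frac{1}{n^{2}}\approx1000\,\varepsilon$.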

> There's something else that is strange to me. If we are considering infinite interaction histories, then we're looking at the entire binary tree at once. But this tree has uncountably many paths! Almost all of the (infinite) paths are incomputable sequences. This means that any computable AI couldn't even consider traversing them. And it also seems to have interesting things to say about the utility function. Does it only need to be defined over computable sequences? What if we have utility over incomputable sequences? These could be defined by second-order logic statements, but remain incomputable. It gives me lots of questions.

I don't really have answers to these questions. One thing you could do is replace the set of all policies (P) with the set of all computable policies, so that the agent would never output an uncomputable action sequence [Edit: oops, not true. You could consider only computable policies, but then end up at an uncomputable policy anyway by chasing the supremum].

comment by Alex_Altair · 2013-01-04T09:58:42.207Z · score: 0 (0 votes) · LW(p) · GW(p)

> Making the assumption that...

Yeah, I was intentionally vague with "the probabilistic nature of things". I am also thinking about how any AI will have logical uncertainty, uncertainty about the precision of its observations, et cetera, so that as it considers further points in the future, its distribution becomes flatter. And having a non-dualist framework would introduce uncertainty about the agent's self, its utility function, its memory, ...

comment by Anja · 2012-12-19T18:08:56.625Z · score: 0 (0 votes) · LW(p) · GW(p)

I think there is something off with the formulas that use policies: If you already choose the policy

$p:p\left(x_{<k}\right)=y_{<k}y_{k}$

then you cannot choose an y_k in the argmax.

Also for the Solomonoff prior you must sum over all programs

$q:q\left(y_{1:m_{k}}\right)=x_{1:m_{k}}$.

Could you maybe expand on the proof of Lemma 1 a little bit? I am not sure I get what you mean yet.

comment by AlexMennen · 2012-12-19T20:08:35.367Z · score: 0 (0 votes) · LW(p) · GW(p)

> If you already choose the policy ... then you cannot choose an y_k in the argmax.

The argmax comes before choosing a policy. In ${\displaystyle \arg\max_{y_{k}\in Y}\sup_{p\in P:p\left(\dot{x}_{<k}\right)=\dot{y}_{<k}y_{k}}...}$, there is already a value for $y_{k}$ before you consider all the policies such that $p\left(x_{<k}\right)=y_{<k}y_{k}$.

> Also for the Solomonoff prior you must sum over all programs

Didn't I do that?

> Could you maybe expand on the proof of Lemma 1 a little bit?

Look at any finite observation sequence. There exists some action you could output in response to that sequence that would allow you to get arbitrarily close to the supremum expected utility with suitable responses to the other finite observation sequences (for instance, you could get within 1/2 of the supremum). Now look at another finite observation sequence. There exists some action you could output in response to that, without changing your response to the previous finite observation sequence, such that you can get arbitrarily close to the supremum (within 1/4). Look at a third finite observation sequence. There exists some action you could output in response to that, without changing your responses to the previous 2, that would allow you to get within 1/8 of the supremum. And keep going in some fashion that will eventually consider every finite observation sequence. At each step n, you will be able to specify a policy that gets you within 2^-n of the supremum, and these policies converge to the policy that the agent actually implements.

I hope that helps. If you still don't know what I mean, could you describe where you're stuck?

comment by Anja · 2012-12-20T21:35:12.625Z · score: 0 (0 votes) · LW(p) · GW(p)

I get that now, thanks.

comment by Thomas · 2012-12-17T19:37:00.321Z · score: 0 (0 votes) · LW(p) · GW(p)

I wouldn't use N element of N. But rather n element of N or something. Just a small nitpick.

EDIT: I was watching on my laptop and didn't see that the second N is a bit fatter.

comment by AlexMennen · 2012-12-17T20:03:41.520Z · score: 1 (1 votes) · LW(p) · GW(p)

$\mathbb{N}$ means the set of natural numbers.

comment by Thomas · 2012-12-17T20:21:47.990Z · score: 0 (0 votes) · LW(p) · GW(p)

Yes, the fat N. It's clear in a higher resolution.