A model of UDT with a halting oracle

post by cousin_it · 2011-12-18T14:18:56.858Z · LW · GW · Legacy · 102 comments

This post requires some knowledge of mathematical logic and computability theory. The basic idea is due to Vladimir Nesov and me.

Let the universe be a computer program U that can make calls to a halting oracle. Let the agent be a subprogram A within U that can also make calls to the oracle. The source code of both A and U is available to A.

Here's an example U that runs Newcomb's problem and returns the resulting utility value:

  def U():
    # Fill boxes, according to predicted action.
    box1 = 1000
    box2 = 1000000 if (A() == 1) else 0
    # Compute reward, based on actual action.
    return box2 if (A() == 1) else (box1 + box2)

A complete definition of U should also include the definition of A, so let's define it. We will use the halting oracle only as a provability oracle for some formal system S, e.g. Peano arithmetic. Here's the algorithm of A:

  1. Play chicken with the universe: if S proves that A()≠a for some action a, then return a.
  2. For every possible action a, find some utility value u such that S proves that A()=a ⇒ U()=u. If such a proof cannot be found for some a, break down and cry because the universe is unfair.
  3. Return the action that corresponds to the highest utility found on step 2.
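
As a rough Python sketch (with provable, proof_search_halts and proved_utility as hypothetical, uncomputable stand-ins for the oracle calls and the proof search; they just make the control flow of the three steps explicit):

  # Hypothetical, uncomputable helpers.
  def provable(statement):
    # Ask the halting oracle whether an exhaustive search for an S-proof
    # of `statement` terminates, i.e. whether S proves `statement`.
    raise NotImplementedError

  def proof_search_halts(a):
    # Oracle call: does an exhaustive search for an S-proof of
    # "A()=a implies U()=u", for some u, ever terminate?
    raise NotImplementedError

  def proved_utility(a):
    # Run that exhaustive search and return the u it finds;
    # only called when proof_search_halts(a) is true.
    raise NotImplementedError

  def A():
    actions = [1, 2]
    # Step 1: play chicken with the universe.
    for a in actions:
      if provable("A() != %d" % a):
        return a
    # Step 2: find a provable utility value for every action.
    utilities = {}
    for a in actions:
      if not proof_search_halts(a):
        raise RuntimeError("the universe is unfair")  # break down and cry
      utilities[a] = proved_utility(a)
    # Step 3: return the action with the highest utility found in step 2.
    return max(actions, key=utilities.get)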

Now we want to prove that the agent one-boxes, i.e. A()=1 and U()=1000000. That will follow from two lemmas.

Lemma 1: S proves that A()=1 ⇒ U()=1000000 and A()=2 ⇒ U()=1000. Proof: you can derive that from just the source code of U, without looking at A at all.

Lemma 2: S doesn't prove any other utility values for A()=1 or A()=2. Proof: assume, for example, that S proves that A()=1 ⇒ U()=42. But S also proves that A()=1 ⇒ U()=1000000, and S proves that 42≠1000000, therefore S proves that A()≠1. According to the first step of the algorithm, A will play chicken with the universe and return 1, making S unsound (thx Misha). So if S is sound, that can't happen.

We see that the agent defined above will do the right thing in Newcomb's problem. And the proof transfers easily to many other toy problems, like the symmetric Prisoner's Dilemma.

But why? What's the point of this result?

There's a big problem with formalizing UDT. If the agent chooses a certain action in a deterministic universe, then it's a true fact about the universe that choosing a different action would have caused Santa to appear, because a material conditional with a false antecedent is vacuously true. Moreover, if the universe is computable, then such silly logical counterfactuals are not just true but provable in any reasonable formal system. When we can't compare actual decisions with counterfactual ones, it's hard to define what it means for a decision to be "optimal".

For example, one previous formalization searched for formal proofs up to a specified length limit. Problem is, that limit is a magic constant in the code that can't be derived from the universe program alone. And if you try searching for proofs without a length limit, you might encounter a proof of a "silly" counterfactual which will make you stop early before finding the "serious" one. Then your decision based on that silly counterfactual can make it true by making its antecedent false... But the bigger problem is that we can't say exactly what makes a "silly" counterfactual different from a "serious" one.

In contrast, the new model with oracles has a nice notion of optimality, relative to the agent's formal system. The agent will always return whatever action is proved by the formal system to be optimal, if such an action exists. This notion of optimality matches our intuitions even though the universe is still perfectly deterministic and the agent is still embedded in it, because the oracle ensures that determinism is just out of the formal system's reach.

P.S. I became a SingInst research associate on Dec 1. They did not swear me to secrecy, and I hope this post shows that I'm still a fan of working in the open. I might just try to be a little more careful because I wouldn't want to discredit SingInst by making stupid math mistakes in public :-)

102 comments

Comments sorted by top scores.

comment by JoshuaZ · 2011-12-19T03:17:57.355Z · LW(p) · GW(p)

This is one of the most interesting posts I've seen on Less Wrong. It is a non-trivial result and it goes a long way towards dealing with a lot of the problems with UDT. I'm deeply impressed and wish I had some helpful comment to make other than general praise.

Replies from: cousin_it
comment by cousin_it · 2011-12-19T05:00:45.439Z · LW(p) · GW(p)

Thanks!

comment by Vladimir_Nesov · 2011-12-18T16:02:52.112Z · LW(p) · GW(p)

The basic idea is due to Vladimir Nesov and me.

(In the setting where there were no oracles, this problem was generally discussed on the decision theory list, with various conditions used to prove when intended moral arguments [A=A1=>U=U1] will in fact be proven by the agent (are agent-provable, not just provable). The "chicken rule" condition used here I figured out in explicit form sometime in April ("To avoid confusion, immediately perform any action that implies absurdity"), though I'm not sure anyone published an actual proof using this condition. A few days ago, cousin_it posted a thought experiment involving a Turing oracle, and I pointed out that using the chicken rule in this setting elicits a much nicer result.)

Replies from: cousin_it
comment by cousin_it · 2011-12-18T16:05:03.865Z · LW(p) · GW(p)

That's an accurate account, thanks for posting it here! I'm always worried about coming off as though I'm grabbing priority...

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2011-12-18T16:13:07.731Z · LW(p) · GW(p)

Thanks for writing this up. I'm currently trying to repurpose my failed unpublished post draft from August (on acausal dependencies) to this result, to give a more systematic account of the problem it addresses, and describe the no-oracles case in more detail. Hope to get it done in about a week.

Edit: And posted, 2.5 months late.

comment by AlephNeil · 2011-12-18T18:13:26.233Z · LW(p) · GW(p)

But the bigger problem is that we can't say exactly what makes a "silly" counterfactual different from a "serious" one.

Would it be naive to hope for a criterion that roughly says: "A conditional P ⇒ Q is silly iff the 'most economical' way of proving it is to deduce it from ¬P or else from Q." Something like: "there exists a proof of ¬P or of Q which is strictly shorter than the shortest proof of P ⇒ Q"?
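
As a rough sketch of that criterion (shortest_proof_length is a hypothetical, uncomputable helper returning the length of the shortest S-proof of a statement, or infinity if there is none):

  def is_silly(P, Q):
    # Silly iff some proof of not-P, or of Q alone, is strictly shorter
    # than the shortest proof of the conditional P => Q.
    conditional = shortest_proof_length("(%s) implies (%s)" % (P, Q))
    direct = min(shortest_proof_length("not (%s)" % P),
                 shortest_proof_length(Q))
    return direct < conditional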

A totally different approach starts with the fact that your 'lemma 1' could be proved without knowing anything about A. Perhaps this could be deemed a sufficient condition for a counterfactual to be serious. But I guess it's not a necessary condition?

Replies from: cousin_it
comment by cousin_it · 2011-12-18T18:51:39.711Z · LW(p) · GW(p)

Both these approaches have been proposed on the workshop list. Good job figuring them out so quickly!

Would it be naive to hope for a criterion that roughly says: "A conditional P ⇒ Q is silly iff the 'most economical' way of proving it is to deduce it from ¬P or else from Q." Something like: "there exists a proof of ¬P or of Q which is strictly shorter than the shortest proof of P ⇒ Q"?

I can make such a criterion fall into nasty Loebian traps by maliciously tweaking the formal system to make some proofs longer than others. That means any proof of correct behavior (like one-boxing) must rely on the intimate details of the proof enumeration order, but we have no idea how to talk formally about such things.

A totally different approach starts with the fact that your 'lemma 1' could be proved without knowing anything about A. Perhaps this could be deemed a sufficient condition for a counterfactual to be serious. But I guess it's not a necessary condition?

A doesn't necessarily get the code of U neatly factored into A and everything else. The agent has to find copies of itself in the universe, it doesn't get told the positions of all copies explicitly. Note that if we replace the U in the post with some other U' that can be proven equivalent to U by S, then A can notice that equivalence, unscramble the code of U' into U, and win.

Replies from: None
comment by [deleted] · 2011-12-18T19:51:31.914Z · LW(p) · GW(p)

Something related to the 'most economical proof' thought is the following: suppose both P and Q depend on some variable x, and the conditional P(x) ⇒ Q(x) is true for all values of x. Then it is silly if either ¬P(x) holds for all x, or if Q(x) holds for all x.

The tricky thing would be to introduce x in a meaningful way. In the case where we want to prove conditionals of the form "agent does X ⇒ world does Y", we want to avoid ending up with a conditional that's true because we prove "agent doesn't do X" (I think we're actually alright with an unconditional proof of "world does Y"). So we want to make the agent's actions somehow depend on whatever x is. For instance, if we gave the agent a very very low probability of picking a random action (and let x be the source of randomness), would that do anything?

Replies from: cousin_it
comment by cousin_it · 2011-12-18T19:57:50.533Z · LW(p) · GW(p)

That solution has also been discussed on the list. Unfortunately, for it to work as intended, the different copies of the agent must use correlated random variables, which is not very realistic in problems like the Prisoner's Dilemma.

Replies from: None
comment by [deleted] · 2011-12-18T20:55:44.335Z · LW(p) · GW(p)

Right, but I'm not suggesting necessarily that the agent actually make decisions randomly, merely that we use this as an aid in proofs. For instance, suppose we assume that all agents (including A itself) have a probability of p (independently) of being replaced by a different algorithm, which just acts randomly. Then, we can try to prove that Pr[world does Y|agent does X] in this probability space approaches 1 as p approaches 0 (colloquially, "agent does X ⇒ world does Y with high probability").

Then if we end up accidentally proving silly things, then we merely end up proving that Pr[agent does X] goes to 0 (because it doesn't happen except when actions are chosen at random). However, the conditional probability shouldn't be affected.

For instance, take Newcomb's problem. The terrible thing in the usual case is that we stumble on a proof that A()=2, so then it follows that A()=1 => U() = 0, which scares us so we output A()=2.

Here that can't happen. Suppose we stumbled on a proof that A()=2 w.h.p. Then when we calculate the conditional probability Pr[U() = 0 | A() = 1], we calculate the probability Pr[U() = 0 and A() = 1] and end up with p^2/4 -- if even one of U() and A() ends up what it's supposed to be, this doesn't happen. So the conditional probability is p/2, which still goes to 0. On the other hand, when we calculate Pr[U() = 1000000 | A() = 1] = Pr[U() = 1M and A() = 1] / Pr[A() = 1], the numerator is p^2/4 + p/2(1-p) -- in the case where A went haywire but U didn't, this is what happens. So this conditional probability is 1 - p/2, which goes to 1. We believe this over the other probability, so we actually output A()=1 -- and this shouldn't happen.

Replies from: cousin_it
comment by cousin_it · 2011-12-20T15:01:01.426Z · LW(p) · GW(p)

I tried to parse your comment but couldn't. What are expressions like Pr[U()=0 | A()=1] supposed to mean? What event is "A()=1"? Newcomb's problem makes two calls to A() which affect the resulting utility differently. If the two instances of A always get "randomly replaced" together or not at all, then I agree that it solves Newcomb's problem, but I think the assumption is too strong. On the other hand, if they get "randomly replaced" independently, I think you need to give a more careful argument, and also I think it won't work :-(

Replies from: None
comment by [deleted] · 2011-12-20T17:28:47.494Z · LW(p) · GW(p)

This is why I defined the probability space to be that, instead of A sometimes doing something random, there's a low probability that A is replaced with a different agent that always does something random. I don't see why the assumption is too strong. We can define the probability space any way we like, since we don't actually have to implement it, all we need is to be able to prove things about the probability space.

Now that I say it carefully, it's somewhat reminiscent of the problem you're always objecting to: that we can't separate A from the rest of the universe. But if we can pick out the things that are "agents" -- basically, if we pick out anything that's not immediately predictable, and I think that can be made rigorous -- then we can make this definition.

Oh, but in the actual Newcomb's problem, the two calls to A are actually calls to different but identical routines, aren't they? Are they? One of them is A's actual thought process, the other is Omega's absolutely perfect prediction of A's thought process. But on the other hand, none of the proofs go through if you can't verify that the two copies are the same, which is equivalent to making them the same subroutine.

Replies from: cousin_it
comment by cousin_it · 2011-12-21T13:09:16.538Z · LW(p) · GW(p)

Yeah, the problem in the non-oracle setting is about separating A from the rest of the universe. I feel that any good solution to this problem should be "crisp" rather than delegated to A's fuzzy reasoning abilities, because at this point in our research we're not yet trying to make a good optimizer, but trying to define mathematically what optimization means in the first place.

comment by orthonormal · 2011-12-18T19:19:34.300Z · LW(p) · GW(p)

Congratulations! This is a key step forward. And also, congrats to the SIAI for getting both you and Vladimir Nesov as official researchers.

Replies from: cousin_it
comment by cousin_it · 2011-12-18T20:08:46.443Z · LW(p) · GW(p)

Thanks!

In case anyone's interested: I'm not a paid researcher and don't want to become one, as long as I can support myself by programming.

Replies from: DanielVarga
comment by DanielVarga · 2011-12-18T20:21:39.051Z · LW(p) · GW(p)

Congratulations! Do you mind an off-topic question? I see you work at Google. What's Google's attitude toward such extracurricular activities? Obviously, I am not asking about specific negotiations, but about written and unwritten company rules. For example, can you dedicate Google company time to SingInst work? (I am thinking of the famous Google 20% rule here.)

Replies from: cousin_it
comment by cousin_it · 2011-12-18T20:27:44.733Z · LW(p) · GW(p)

The answer is refreshingly boring: I can do this stuff, but not on company time. 20% time is supposed to be used for Google projects.

Replies from: SilasBarta
comment by SilasBarta · 2011-12-19T22:02:59.322Z · LW(p) · GW(p)

Sorry, I just had this image:

cousin_it: I want to use my 20% time to prevent the extermination of humanity.
Google overlord: Okay, and this would help Google ... how, exactly?

(I know, I know, oversimplification.)

Replies from: Dr_Manhattan
comment by Dr_Manhattan · 2011-12-21T00:36:49.875Z · LW(p) · GW(p)

Ads^^^3

Replies from: Nisan
comment by Nisan · 2012-01-17T05:56:27.438Z · LW(p) · GW(p)

Heh.

comment by lukeprog · 2012-01-11T04:08:35.729Z · LW(p) · GW(p)

Is this the first time an advanced decision theory has had a mathematical expression rather than just a verbal-philosophical one?

This totally deserves to be polished a bit and published in a mainstream journal.

Replies from: cousin_it, Vladimir_Nesov
comment by cousin_it · 2012-01-11T18:20:33.472Z · LW(p) · GW(p)

Is this the first time an advanced decision theory has had a mathematical expression rather than just a verbal-philosophical one?

That's a question of degree. Some past posts of mine are similar to this one in formality.

Nesov also said in an email on Jan 4 that now we can write this stuff up. I think Wei and Gary should be listed as coauthors too.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2012-01-12T10:35:56.353Z · LW(p) · GW(p)

I still want to figure out games (like PD) in the oracle setting first. After the abortive attempt on the list, I didn't yet get around to rethinking the problem. Care to take a stab?

Replies from: cousin_it
comment by cousin_it · 2012-01-12T12:41:27.210Z · LW(p) · GW(p)

The symmetric case (identical payoffs and identical algorithms) is trivial in the oracle setting. Non-identical algorithms seem to be moderately difficult: our candidate solutions in the non-oracle setting only work because they privilege one of the outcomes a priori, like Loebian cooperation. Non-identical payoffs seem to be very difficult, we have no foothold at all.

I think we have a nice enough story for "fair" problems (where easy proofs of moral arguments exist), and no good story for even slightly "unfair" problems (like ASP or non-symmetric PD). Maybe the writeup should emphasize the line between these two kinds of problems. It's clear enough in my mind.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2012-01-12T17:39:11.221Z · LW(p) · GW(p)

Part of the motivation was to avoid specifying agents as algorithms, specifying them as (more general) propositions about actions instead. It's unclear to me how to combine this with possibility of reasoning about such agents (by other agents).

Replies from: cousin_it
comment by cousin_it · 2012-01-12T18:32:50.760Z · LW(p) · GW(p)

That's very speculative, I don't remember any nontrivial results in this vein so far. Maybe the writeup shouldn't need to wait until this gets cleared up.

comment by Vladimir_Nesov · 2012-01-12T10:35:35.995Z · LW(p) · GW(p)

Is this the first time an advanced decision theory has had a mathematical expression rather than just a verbal-philosophical one?

(It's not "advanced", it's not even in its infancy yet. On the other hand, there is a lot of decision theory that's actually advanced, but solves different problems.)

Replies from: Jayson_Virissimo
comment by Jayson_Virissimo · 2012-06-16T08:48:08.418Z · LW(p) · GW(p)

I think Luke meant "advanced" as in superrationality, not "advanced" as in highly developed.

BTW, nice work.

comment by Vladimir_Nesov · 2011-12-18T15:55:05.006Z · LW(p) · GW(p)

In contrast, the new model with oracles has a nice notion of optimality, relative to the agent's formal system.

Specifically, given any formal system S for reasoning about the world and the agent's place in it, the chicken rule (step 1) forces S to generate consistent theories of consequences for all possible actions. This seems to crack a long-standing problem in counterfactual reasoning, giving a construction for counterfactual worlds (in the form of consistent formal theories) from any formal theory that has the actual world as a model.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2011-12-21T15:51:23.513Z · LW(p) · GW(p)

...and the construction turns out not as interesting as I suspected. Something like this is very easy to carry out by replacing the agent A with another that can't be understood in S, but is equivalent to A (according to a system stronger than S). As a tool for understanding decision problems, this is intended to solve the problem of parsing the world in terms of A, finding how it depends on A, finding where A is located in the world, but if we can find all instances of A in the world to perform such surgery on them, we've already solved the problem!

Perhaps A can decide to make itself incomprehensible to itself (to any given S, rather), thus performing the transformation without surgery, formalization of free will by an act of then-mysterious free will? This could still be done. But it's not clear if this can be done "from the outside", where we don't have the power of making A transform to make the dependence of the world on its actions clear.

comment by Zetetic · 2011-12-18T21:13:21.042Z · LW(p) · GW(p)

According to the first step of the algorithm, A will play chicken with the universe and return 1, making S inconsistent. So if S is consistent, that can't happen.

Is this right? I'm wondering if you're assuming soundness relative to the natural semantics about A, because in step one, it isn't clear that there is a contradiction in S rather than a failure of S to be sound with respect to the semantics it's supposed to model (what actions A can take and their utility). I might be confused, but wouldn't this entail contradiction of the soundness of S rather than entailing that S is inconsistent? S would only be inconsistent if it can prove both A() = a and A()≠a, but unless you have further hidden assumptions about S I don't see why A returning a would entail that S proves A() = a.

This is how I want to interpret this: S is some deduction system capable of talking about all actions A() can make, and proving some range of utility results about them. S is also consistent and sound.

Play chicken with the universe: if S proves that A()≠a for some action a, then return a.

If S proves for all a there is some u such that [A() = a ⇒ U() = u] , output argmax (a) else exit and output nothing.

Since proving A can't take an action a ( that is, A()≠a ) entails that S is not sound (because A will take such an action in step 1), S can't prove any such result. Also, since proving that an action has two distinct utility values leads to A≠a, the soundness and consistency of S entails that this can't happen. Does this seem right?

Also, step two seems too strong. Wouldn't it suffice to have it be:

1) For all actions a, if there is some u such that [A() = a ⇒ U() = u] and u > 0, do nothing; else add [A() = a ⇒ U() = 0] to the axioms of S (only for the duration of this decision)

2) output argmax (a)

My thought is that there could be some possible actions the agent can take that might not have provable utility and it seems like you should assign an expected utility of 0 to them (no value being privileged, it averages out to 0), but if you can prove that at least one action has positive utility, then you maximize expected utility by choosing the one with the highest positive utility.

This is weaker than the current step two but still seems to have the desired effect. Does this hold water or am I missing something?
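
A rough sketch of this variant, reusing the hypothetical provable/proof_search_halts/proved_utility helpers from the sketch in the post, and simplifying "add [A() = a ⇒ U() = 0] to the axioms of S" to just assigning a default utility of 0 for the argmax:

  def A_variant():
    actions = [1, 2]
    # Chicken step as before.
    for a in actions:
      if provable("A_variant() != %d" % a):
        return a
    # Modified step 2: default to utility 0 when no utility is provable.
    utilities = {}
    for a in actions:
      utilities[a] = proved_utility(a) if proof_search_halts(a) else 0
    return max(actions, key=utilities.get)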

Replies from: cousin_it
comment by cousin_it · 2011-12-18T21:22:12.560Z · LW(p) · GW(p)

I'm wondering if you're assuming soundness relative to the natural semantics about A

You're right. Misha already pointed that out and I edited the post.

For all actions a, if there is some u such that [A() = a ⇒ U() = u] and u > 0, else add [A() = a ⇒ U() = 0] to the axioms of S

Yeah, such a variation would work, but I'm uncomfortable calling A the optimal algorithm in a universe where some actions don't have provable utility. Such universes often arise from non-symmetric multiplayer games and I'd rather not insinuate that I have a good solution for those.

comment by snarles · 2011-12-25T11:10:06.041Z · LW(p) · GW(p)

Would it be correct to say that S can't prove Lemma 2?

Replies from: cousin_it
comment by cousin_it · 2011-12-25T13:15:15.277Z · LW(p) · GW(p)

Yes.

comment by Quill_McGee · 2014-06-08T15:29:50.114Z · LW(p) · GW(p)

"S proves that A()=1 ⇒ U()=42. But S also proves that A()=1 ⇒ U()=1000000, therefore S proves that A()≠1" I don't see how this follows. Perhaps it is because, if the system was sound, it would never prove more than one value for U() for a given a, therefore by the principle of explosion it proves A()≠1? But that doesn't seem to actually follow. I'm aware that this is an old post, but on the off chance that anyone ever actually sees this comment, help would be appreciated.

comment by Decius · 2012-08-29T06:32:07.864Z · LW(p) · GW(p)

Suppose S proves that A()≠1 and A()≠2. What does A return?

Replies from: Kindly
comment by Kindly · 2012-08-29T12:28:20.120Z · LW(p) · GW(p)

A will check for one of those first, and then return it. The order in which it checks 1 and 2 doesn't matter, because this can only happen if S is unsound.

Replies from: Decius
comment by Decius · 2012-08-29T20:40:04.575Z · LW(p) · GW(p)

So, the primary point is to make it trivial to show that there is no proof in S regarding the output of A?

Replies from: Kindly
comment by Kindly · 2012-08-29T21:14:01.773Z · LW(p) · GW(p)

Exactly. We're worried about proving something like "A()=1 ⇒ U()=0" by proving "A()≠1", which would convince A() not to return 1. This avoids shortcuts of that nature.

Replies from: Decius
comment by Decius · 2012-08-29T22:27:01.914Z · LW(p) · GW(p)

So, in the example given, it is clear that A()=1, and also clear that S cannot prove that A()=1. What system are we using to show that A()=1, and why do we believe that that system is sound?

EDIT: More generally, is there any sound formal system T which can prove that A()=1 for all sound formal systems S that can prove that A()=1 ⇒ U()=1000000 and A()=2 ⇒ U()=1000?

If so, then T cannot prove that A()=1 ⇒ U()=1000000 and A()=2 ⇒ U()=1000, why do we believe that? If not, why do we believe that A()=1?

Replies from: Kindly
comment by Kindly · 2012-08-29T23:57:30.477Z · LW(p) · GW(p)

Presumably whichever system one typically uses to reason about the soundness of Peano arithmetic would suffice.

Considering that the algorithm involved is already impossible to implement by virtue of calling a halting oracle, I don't see what further harm it does to make the additional assumption that Peano arithmetic is sound.

Replies from: Decius
comment by Decius · 2012-08-30T03:01:08.630Z · LW(p) · GW(p)

The provability oracle is only needed for the general-case implementation: if you have a proof that S does not prove A()≠n for any n, and the further proofs that A()=n ⇒ U()=u for each n, then the need for the oracle is satisfied. If you replace the oracle with a routine which sometimes breaks down and cries, this specific example might still work.

What I'm saying is that either there exists a sound system which can prove the three statements A()=1, A()=1 ⇒ U()=1000000 and A()=2 ⇒ U()=1000, or there does not exist such a sound system. If such a system exists, then it does not prove that A()=1 ⇒ A()≠2 (or any other value), and therefore does not contain Peano arithmetic. Since the proof here does make all three claims, the system used here either does not contain PA or is unsound. Since PA is assumed whenever numbers are used, the system is unsound.

The claim that the algorithm will do the right thing in Newcomb's problem is suspect, because any formal system that can use the proof provided is unsound.

Replies from: Kindly
comment by Kindly · 2012-08-30T17:53:24.566Z · LW(p) · GW(p)

If such a system exists, then it does not prove that A()=1 ⇒ A()≠2 (or any other value)

Why not?

Replies from: Decius
comment by Decius · 2012-08-30T18:23:42.811Z · LW(p) · GW(p)

Because such a system would prove that A()≠2, which would result in A() returning 2 in the first step, when that system is used as the system S. Since it proves something which is not true, it is unsound.

A sound system need not include the axioms of PA to include the axiom "Anything that PA proves is true".

Such a system T can be sound and still prove that A()=1, but cannot prove that A()=1 ⇒ A()≠2, because:

T does not contain the axiom A=n ⇒ A≠m, for m≠n. Therefore A()=1 ⇒ A()≠2 is not provable in T

PA does not prove (T proves A()=1 ⇒ A()≠2), and does not include the axiom "Anything that T proves is true." A()=1 is not provable in PA, and therefore A()≠2 is not provable in PA.

No sound system that can be used as S can prove that A()≠2: Since U only requires that S be sound and be able to prove A()=a ⇒ U()=u for all permissible a, no sound system can prove A()=a ⇒ U()=u for all permissible a and A()≠2.

Replies from: Kindly
comment by Kindly · 2012-08-30T18:32:47.214Z · LW(p) · GW(p)

Wait, no, no, no. The system in which we are working to analyze the algorithm A() and prove that it works correctly is not the same as the system S used by A()! Although S is not powerful enough to prove A()≠2 -- this would require that S is aware of its own soundness -- we can pick a system S that we know is sound, or can assume to be sound, such as Peano arithmetic, and based on that assumption we can prove that A()=1.

Replies from: Decius
comment by Decius · 2012-08-31T16:09:05.259Z · LW(p) · GW(p)

What are the requirements of S? The only ones given are that S is a formal system and we know that S proves A()=a ⇒ U()=u for all permissible a.

Why does the system we are using to evaluate A() not meet the requirements of S?

I understand the point is to formalize a system where we take the action which results in the best outcome, but why is it important that S not be able to prove that A()≠2 or A()≠1? No function can evaluate its own output and use that as a factor in determining its output, and any counterfactual proof along the lines of

A()≠2 ⇒ (A()=2 ⇒ U()=3^^^3)

cannot force A() to return 2, because A()≠2 has already been proven.

Replies from: Kindly
comment by Kindly · 2012-08-31T16:47:16.136Z · LW(p) · GW(p)

What are the requirements of S? The only ones given are that S is a formal system and we know that S proves A()=a ⇒ U()=u for all permissible a.

Just assume S is Peano arithmetic. Then the proof in the post is probably valid in something like ZFC.

Why does the system we are using to evaluate A() not meet the requirements of S?

The algorithm would work fine if S were ZFC, it's just that then we would need to talk about the soundness of ZFC in the proof.

any counterfactual proof along the lines of A()≠2 ⇒ (A()=2 ⇒ U()=3^^^3) cannot force A() to return 2, because A()≠2 has already been proven.

We're not worried about that kind of counterfactual proof, for the reasons you give. We're worried about self-fulfilling counterfactual proofs. Suppose A() proves that A()=1, then concludes that box2 = 0, and uses that to prove that 1 is the best thing to return. This isn't impossible, because everything S proves did turn out to be correct; it just leads to A() one-boxing.

Replies from: Decius
comment by Decius · 2012-08-31T17:21:30.553Z · LW(p) · GW(p)

Just assume S is Peano arithmetic. Then the proof in the post is probably valid in something like ZFC.

Can this scenario be established in ZFC or PA?

How do you prove A()=1 without proving that A()=1?

Where P(V) designates the existence of a proof in S that V is true:

P(V)⇒V (because S is sound)

P(A()=1) ⇒ P(A()≠2) ⇒ P((A()=2 ⇒ U()=3^^^3)) (counterfactual)

P(A()=2 ⇒ U()=3^^^3) ∪ P(A()=1 ⇒ U()=1000000) ⇒ A()=2 (because A() will return the a with the highest proven U())

∴ P(A()=1) ⇒ A()=2 ∴ S cannot prove that A()=1 unless S is unsound

UNLESS: we have a system S which cannot prove a counterfactual; in such a system A()≠2 does not imply (A()=2 ⇒ U()=3^^^3). Since counterfactuals are not useful, I don't see a disadvantage in not being able to prove them.

Replies from: Kindly
comment by Kindly · 2012-08-31T17:33:04.579Z · LW(p) · GW(p)

P(A()=1) ⇒ P(A()≠2) ⇒ P((A()=2 ⇒ U()=3^^^3))

I find myself unable to parse this.

Edit: but I think I know what the problem is anyway. Although A() could eventually find a proof of A()=2 ⇒ U()=3^^^3 if there is a proof of A()≠2, it could also find a proof of A()=2 ⇒ U()=-3^^^3, because it's not specified which order the proofs will be checked in.

However, if we ensure that no proof of A()≠2 exists, then there exists at most one u for which S proves that A()=a ⇒ U()=u. This means that the above isn't a problem.

Replies from: Decius
comment by Decius · 2012-08-31T17:46:33.914Z · LW(p) · GW(p)

S proves A()=1 implies S proves A()≠2

S proves A()≠2 implies that S proves (A()=2 ⇒ U()=3^^^3)

(S proves (A()=2 ⇒ U()=3^^^3), union with S proves (A()=1⇒ U()=1000000)), implies that A()=2

Therefore, if S proves A()=1, then A()=2

Replies from: Kindly
comment by Kindly · 2012-08-31T18:47:29.875Z · LW(p) · GW(p)

Right, so I answered this in the edit before you made this reply, but just to reiterate: just because S proves (A()=2 ⇒ U()=3^^^3) and (A()=1 ⇒ U()=1000000), doesn't mean that A() will return 2. A() just seizes on the first values u1 and u2 for which S proves A()=1⇒U()=u1 and A()=2⇒U()=u2.

If we believe that S is sound, then these values will happen to be such that the utility for the one A() does return is correct: either A()=1 and u1=1000000 (and u2 can be anything), or A()=2 and u2=1000 (and u1 can be anything). However, we would like it to be true that u1=1000000 and u2=1000, no matter what A() returns, because this lets A() make the correct decision.

To do this, we play chicken. Now the counterfactual proofs we have been looking at die. However, the proofs that A()=1⇒U()=1000000 and A()=2⇒U()=1000 obtained by looking at U()'s code stick around. This ensures that these are the proofs that A() finds.

Replies from: Decius
comment by Decius · 2012-09-01T01:50:41.860Z · LW(p) · GW(p)

So long as the result of A() depends on what S can prove its results imply, S cannot prove anything about the results of A() and still be sound. I demonstrated that if S is sound, it cannot prove that A()=1.

My confusion was in trying to apply an unsound system that is more general and native to me: Common sense, combined with a random assortment of logical statements that are true in PA but not in common sense.

Common sense proves that A()=1 easily, which implies A()≠2, but in sound common sense, A()≠2 does not imply (A()=2 ⇒ U()=3^^^3).

The system S "Sound common sense, plus the axiom 'Whatever PA proves is true'" works perfectly both for S and to prove that A()=1. It's just not yet possible to list all of the axioms of that system, nor to prove that we are working in that system at any time.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2012-09-01T02:19:50.535Z · LW(p) · GW(p)

Common sense proves that A()=1 easily, which implies A()≠2, but in sound common sense, A()≠2 does not imply (A()=2 ⇒ U()=3^^^3).

It does imply that. What happens is that you (in "common sense" system) don't prove A()=2 as a consequence of proving (say) both (A()=1 ⇒ U()=1000000) and (A()=2 ⇒ U()=3^^^3), since the value of A() is determined by what S can prove, not by what "common sense" can prove.

Replies from: Decius
comment by Decius · 2012-09-01T04:12:11.512Z · LW(p) · GW(p)

No, I'm using common sense both to evaluate A() and as S. In sound common sense, A()≠2 does not imply (A()=2 ⇒ U()=3^^^3).

That's because in common sense A()=2 ⇒ U()=1000 is true and provable, and (A()=2 ⇒ U()=1000) implies (A()=2 ⇒ U()≠3^^^3), and in common sense (A()=2 ⇒ U()≠3^^^3) and (A()=2 ⇒ U()=3^^^3) are contradicting statements rather than proof that A()≠2.

The soundness of common sense is very much in dispute, except in the case of common sense along with some basic axioms of fully described formal systems: common sense with those added axioms is clearly unsound.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2012-09-01T04:52:15.149Z · LW(p) · GW(p)

In sound common sense, A()≠2 does not imply (A()=2 ⇒ U()=3^^^3).

Okay, then I don't know what kind of reasoning system this "common sense" is or how to build an inference system that implements it to put as S in A(). As a result, discussing it becomes unfruitful unless you give more details about what it is and/or motivation for considering it an interesting/relevant construction.

(What I wanted to point out in the grandparent is that the wrong conclusion can be completely explained by reasoning-from-outside being separate from S, without reasoning-from-outside losing any standard properties.)

comment by [deleted] · 2011-12-31T09:20:47.243Z · LW(p) · GW(p)

Is there a reason you're using logical conditionals rather than Pearlian counterfactual conditionals like TDT?

comment by Douglas_Knight · 2011-12-19T01:17:30.589Z · LW(p) · GW(p)

What about the prisoner's dilemma?
Here's a concrete example of a game between non-identical agents: agent A of this post, who maximizes outcome, and agent B, who cooperates iff the other player plays the same move.

You seem to imply that you consider this approach practically equivalent to your earlier approach with bounded proof length. Is that right?

But in your first post, you say you're not sure what happens with two player games, while in your second, you say "if agent A tries to reason about agent B in the same way, it will fail miserably." What changed your mind? Do you still hold this position? Did you consider this PD pairing back then?

Replies from: cousin_it
comment by cousin_it · 2011-12-19T04:25:11.568Z · LW(p) · GW(p)

This post says only that the result transfers to the symmetric PD, i.e. the case with identical agents. I still don't have any good solution for other cases.

Here's a concrete example of a game between non-identical agents: agent A of this post, who maximizes outcome, and agent B, who cooperates iff the other player plays the same move.

B cooperates iff S proves that A()=B(), right? In this case, if A and B are using the same S, I think A will cry and B will defect :-( Here's why:

1) S cannot prove any statement like "S cannot prove X", because that requires S to prove the consistency of S.

2) Therefore S cannot prove something like "S cannot prove that A doesn't cooperate".

3) Therefore S cannot prove that A won't play chicken on step 1.

4) Therefore S cannot prove that A()=B().

5) Therefore B will defect, and S cannot prove that.

6) Therefore A can't find a complete set of proofs saying all of A's actions imply B's defection.

7) If A can find a complete set of proofs saying one of A's actions implies B's cooperation, then A will do that action, making the counterfactual true. But B defects, so that can't happen.

8) By (6) and (7), the only remaining possibility is that A can't find a complete set of proofs at all, so it breaks down and cries.

I haven't checked the above argument very carefully, so it can contain many mistakes. Please recheck before accepting it!

ETA: Douglas found a fatal mistake in the argument, so don't bother checking :-)

Replies from: Douglas_Knight
comment by Douglas_Knight · 2011-12-19T05:27:28.396Z · LW(p) · GW(p)

I don't see how you got from 3) S cannot prove that A won't play chicken to 4) S cannot prove that A()=B(). That inference seems way too general; it should apply to any agent of this type. In particular, S certainly can prove that A()=A(). Perhaps 3 is supposed to break some symmetry between A and B? Perhaps I wasn't clear that I wanted B to play chicken, too. That restores the symmetry, if that's what the point of 4 was.

I also think that there is a subtle gap in 5 => 6. If S could prove that all of A's actions imply B's defection, then S could prove that B defects, hence contradiction. But A's decision doesn't need to examine all of A's actions: crying can be omitted. I doubt that this actually helps, though.

Replies from: cousin_it, cousin_it
comment by cousin_it · 2011-12-19T20:36:01.793Z · LW(p) · GW(p)

I can't solve your problem yet, but I found a cute lemma. Let P be the proposition "A()=a" where a is the first action inspected in step 1 of A's algorithm.

  1. (S⊢¬P)⇒P (by inspection of A)

  2. S⊢((S⊢¬P)⇒P) (S can prove 1)

  3. (S⊢(S⊢¬P))⇒(S⊢P) (unwrap 2)

  4. (S⊢¬P)⇒(S⊢(S⊢¬P)) (property of any nice S)

  5. (S⊢¬P)⇒(S⊢P) (combine 3 and 4)

  6. (S⊢¬P)⇒(S⊢(P∧¬P)) (rephrasing of 5)

  7. (S⊢¬P)⇒¬Con(S)

All the above steps can also be formalized within S, so each player knows that if any player plays chicken with the first inspected action, then S is inconsistent. The proof generalizes to the second inspected action, etc., by looking at the first one that yields a contradiction. But if S is inconsistent, then it will make all players play chicken. So if one player plays chicken, then all of them do, and that fact is provable within S.

Did you manage to make any progress?

Replies from: Nisan, Douglas_Knight, Douglas_Knight
comment by Nisan · 2012-01-29T07:36:23.700Z · LW(p) · GW(p)

The proof generalizes to the second inspected action, etc., by looking at the first one that yields a contradiction.

I tried this in the case that the output of A provably lies in the set {a,b}. I only managed to prove

(S⊢¬P)⇒¬Con(S+Con(S))

where P is the proposition "A()=b" where b is the second inspected action. But this still implies

if one player plays chicken, then all of them do, and that fact is provable within S.

Replies from: cousin_it
comment by cousin_it · 2012-01-29T11:18:43.375Z · LW(p) · GW(p)

Thanks! I think you found a real hole and the conclusion is also wrong. Or at least I don't see how

(S⊢¬P)⇒¬Con(S+Con(S))

implies the conclusion.

Replies from: Nisan
comment by Nisan · 2012-01-29T19:18:00.615Z · LW(p) · GW(p)

The conclusions that I think I can draw are

¬(S⊢A()≠a)∧(S⊢A()≠b) ⇒ Con(S)∧¬Con(S+Con(S)) ⇒ A()=b

So if one player plays chicken on the second inspected action, then all of them do.

Replies from: cousin_it
comment by cousin_it · 2012-01-29T21:15:08.453Z · LW(p) · GW(p)

I'm still not getting it. Can you explain the proof of the following:

Con(S)∧¬Con(S+Con(S)) ⇒ A()=b

Replies from: Nisan
comment by Nisan · 2012-01-30T00:59:37.580Z · LW(p) · GW(p)

  1. ¬Con(S+Con(S)) ⇒ S⊢¬Con(S)

  2. ⇒ S⊢Prv(A()≠a) (if S is inconsistent, then it proves anything)

  3. ⇒ S⊢A()=a (because A plays chicken)

  4. ⇒ S⊢A()≠b

  5. Con(S)∧¬Con(S+Con(S)) ⇒ Con(S)∧(S⊢A()≠b) (follows directly from 4)

  6. ⇒ (S⊬A()=b)∧(S⊢A()≠b) (a consistent theory can't prove a proposition and its negation)

  7. ⇒ (S⊬A()≠a)∧(S⊢A()≠b) (here I'm assuming that S⊢(A()=a∨A()=b). I don't know what to do if A can choose between more than two alternatives.)

  8. ⇒ A()=b (A plays chicken on the second inspected action)

comment by Douglas_Knight · 2011-12-19T22:34:14.270Z · LW(p) · GW(p)

Cool.

I had a lot of trouble reading this because in my mind ⇒ binds tighter than ⊢. When I figured it out, I was going to suggest that you use spaces to hint at parsing, but you already did. I don't know what would have helped.

comment by Douglas_Knight · 2011-12-20T19:25:12.253Z · LW(p) · GW(p)

Since we like symmetry, I'm going to change notation from A and B to I and O for "I" and "opponent." (or maybe "input" and "output")

We should be careful about the definition of B. Simply saying that it cooperates if I()=O() causes it to blow up against the defectbot. Instead, consider the propositions PC: I()=C ⇒ O()=C and PD: I()=D ⇒ O()=D. We really mean that B should cooperate if S proves P=PC∧PD. What if it doesn't? There are several potential agents: B1 defects if S doesn't prove P; B2 defects if S proves ¬P, but breaks down and cries if it is undecidable; B3 breaks down if either PC or PD is undecidable, but defects if they are both decidable and one is false. B3 sounds very similar to A and so I think that symmetry proves that they cooperate together. If we modified A not to require that every action had a provable utility, but only that one action had a utility provably as big as all others, then I think it would cooperate with B2.
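
As rough sketches of B1 and B2 (reusing the hypothetical provable helper from the sketch in the post; P is the conjunction PC∧PD written out as a statement about the two players I and O):

  P = "(I() == 'C' implies O() == 'C') and (I() == 'D' implies O() == 'D')"

  def B1():
    # Cooperate iff S proves P; otherwise defect.
    return 'C' if provable(P) else 'D'

  def B2():
    # Cooperate if S proves P, defect if S refutes it, cry if P is undecidable.
    if provable(P):
      return 'C'
    if provable("not (%s)" % P):
      return 'D'
    raise RuntimeError("P is undecidable in S: break down and cry")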

These examples increase my assessment of the possibility that A and B1 cooperate.

(I'm ignoring the stuff about playing chicken, because the comment I'm responding to seems to say I can.)

Replies from: cousin_it
comment by cousin_it · 2011-12-22T13:32:49.572Z · LW(p) · GW(p)

B3 sounds very similar to A and so I think that symmetry proves that they cooperate together. If we modified A not to require that every action had a provable utility, but only that one action had a utility provably as big as all others, then I think it would cooperate with B2.

I think your conclusions can be right, but the proofs are vague. Can you debug your reasoning like you debugged mine above?

comment by cousin_it · 2011-12-19T05:33:12.923Z · LW(p) · GW(p)

Nice catches! You're right on both points. I have to go to sleep now, will figure this out tomorrow.

comment by FAWS · 2011-12-19T00:34:38.155Z · LW(p) · GW(p)

Wouldn't the same job be done by the agent using proper counterfactuals instead of logical ones, which seems like something that would also be needed for other purposes?

I don't know who (if anyone) has done any work on this, but when a human considers a counterfactual statement like "If Gore won in 2000" that is very underspecified, because the implicit assumption is to discard contradicting knowledge, but how to do that exactly is left open. Humans just know that they should assume something like "Bush didn't win Florida" instead of "266 > 271".

If an example agent needs to be able to use precisely defined proper counterfactuals, I think it might be possible to do that with an ordering function for its current knowledge. The agent would start with the counterfactual under consideration, add items from its knowledge base in the order specified for that counterfactual, test for each item whether it can find a contradiction, and discard the current item whenever it finds a contradiction (from consideration for evaluating the counterfactual).

For the example I think the order would look like this: A=a, S, U except for A, the source code of A.
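
A rough sketch of that procedure (contradiction_found is a hypothetical, in general uncomputable, proof-search helper):

  def counterfactual_theory(counterfactual, knowledge_items, order_key):
    # Start from the counterfactual, add knowledge items in the specified
    # order, and discard any item that lets the agent find a contradiction.
    accepted = [counterfactual]
    for item in sorted(knowledge_items, key=order_key):
      if not contradiction_found(accepted + [item]):
        accepted.append(item)
    return accepted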

That would do much the same thing as "playing chicken with the universe" with respect to not being impressed by proofs about its output, no?

More generally I think the items would be further split up, particularly "A behaves like a program with this source code in all other cases" would come before "A behaves like a program with this source code in the case under consideration". Other instances of A would also have to be treated as instances of A and not as programs with the source code (i.e. statements like "A' has the same source code as A" would come before either of their source codes in the ordering).

Does that make sense?

Replies from: cousin_it
comment by cousin_it · 2011-12-19T04:51:59.361Z · LW(p) · GW(p)

The idea of using an ordering function for knowledge is new to me, thanks!

For the example I think the order would look like this: A=a, S, U except for A, the source code of A.

The hard part is getting "U except for A". Given the source code of U and the source code for A, we don't know how to "factor" one program by another (or equivalently, find all correlates of the agent in the universe). If we knew how to do that, it would help a lot with UDT in general.

Replies from: FAWS
comment by FAWS · 2011-12-19T07:05:43.214Z · LW(p) · GW(p)

I guess you would actually have various knowledge items about the world, some of them implying things about A, and any item that, in conjunction with the others accepted so far, leads to a contradiction with A=a that the agent can find would be discarded. Maybe that already would be enough; I'm not sure.

Replies from: cousin_it
comment by cousin_it · 2011-12-20T18:17:25.521Z · LW(p) · GW(p)

What considerations should be used to order the knowledge items?

Replies from: FAWS
comment by FAWS · 2011-12-20T19:57:41.313Z · LW(p) · GW(p)

That's a really difficult question. It's hard to say what principles humans follow when evaluating counterfactuals, and even harder to say to what extent that's a reasonable example to follow.

I think higher level observational laws should usually have a higher priority than concrete data points they are based on, and all else equal they should be in descending order of generality and confidence. That the US president can veto US federal legislation and that the person who can veto US federal legislation is the same person as the commander in chief of the US military forces should both have a higher priority than that George W. Bush could veto US federal legislation.

It would also depend on what the counterfactual is used for. For counterfactuals concerning the past, timing would obviously be extremely important.

In the case of considering the counterfactual implications of a decision the agent makes you could try ascending order of strength as Bayesian evidence about the agent as a secondary criterion, maybe? Or perhaps instead ratio of that strength to general importance? (Which would probably require nested counterfactuals? Are we concerned with computability yet?)

EDIT: I think the knowledge items would have redundancy so that even if the agent can derive itself directly from the laws of physics and needs to reject (one of) them it can reconstruct almost normal physics from various observational laws. It also seems redundancy could reduce the importance of the initial order somewhat.

comment by Shmi (shminux) · 2011-12-18T21:20:48.736Z · LW(p) · GW(p)

Stupid question: A() always returning 1 manifestly solves the Newcomb problem, no fancy proofs required. What am I missing?

Replies from: cousin_it
comment by cousin_it · 2011-12-18T21:23:30.578Z · LW(p) · GW(p)

The implementation of A in the post works correctly for many different U's.

comment by DanArmak · 2011-12-18T14:51:16.146Z · LW(p) · GW(p)

You put the oracle outside the universe, but the logical impossibility of an oracle is still visible from inside the universe. For example, this agent (EDIT simplified from original comment):

  1. Choose any two different actions a, b.
  2. If S can prove A()=a, return b, proving S inconsistent.
  3. Otherwise return a, proving S incomplete (i.e. undeserving of the name oracle).
  4. The universe breaks down and cries because A is unfair.
  5. Profit???

I don't see why exploring a universe with such contradictions is useful for UDT, can you explain?

Replies from: cousin_it
comment by cousin_it · 2011-12-18T14:55:55.855Z · LW(p) · GW(p)

The universe defined in the post doesn't have any contradictions. Every oracle studied in theoretical CS is "incomplete" in the sense that it can't answer all questions about programs that invoke the oracle itself, e.g. the halting oracle cannot answer all questions about programs that can invoke the halting oracle. But it can answer some questions about such programs, which turns out to be enough for the agent to make a decision.

Replies from: None, DanArmak
comment by [deleted] · 2011-12-18T16:41:34.727Z · LW(p) · GW(p)

I am not convinced that the oracle can answer any questions about programs that call the oracle. That is simply not one of the things it does.

The halting problem corresponds to a language L which consists of all pairs (P, w) such that program P halts on input w. Then the oracle is defined to return 1 if you give it a pair in L, and 0 otherwise. But to do this, we have to have L defined already, so that definition cannot refer to this oracle. A recursive definition "L is the set of programs with an oracle for L that halt" is just ill-defined.

Replies from: cousin_it
comment by cousin_it · 2011-12-18T16:52:18.437Z · LW(p) · GW(p)

We're not asking the oracle directly about the halting behavior of programs that call the oracle. Instead we're using it as a provability oracle, asking it only questions of the form "if a program exhaustively searches for proofs of such-and-such statement in the formal system S, will it ever terminate?"
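
Schematically (a sketch; proof_searcher and halting_oracle are hypothetical stand-ins):

  def provable(statement):
    # Reduce "does S prove `statement`?" to a halting question: build a
    # program that enumerates S-proofs and halts iff it finds a proof of
    # `statement`, then ask the halting oracle about that program.
    searcher = proof_searcher(statement)
    return halting_oracle(searcher)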

Since the statements we're trying to prove, in turn, refer to the behavior of programs that call the oracle, we need S to have some assumptions about the behavior of the oracle. These assumptions can be pretty weak, e.g. the proof in the post seems to go through even if S knows only that the oracle is a pure function that returns identical outputs for identical inputs. Or you could make the assumptions stronger, e.g. formalize the notion of "halting oracle" within S. That wouldn't break anything either, I think.

Replies from: None
comment by [deleted] · 2011-12-18T17:56:50.028Z · LW(p) · GW(p)

I see, thanks, that clears it up.

However, now I have an objection to a different step! (which I'm sure you will also address)

But S also proves that A()=1 ⇒ U()=1000000, therefore S proves that A()≠1. According to the first step of the algorithm, A will play chicken with the universe and return 1, making S inconsistent.

Just to clarify, let's say that the first step of A first checks if S proves that A()≠2, and then checks if S proves that A()≠1 (we could do this in either order, it doesn't matter, but this order makes the following explanation make sense better).

S proves that A()≠1. However, S will only be inconsistent if S can also prove that A()=1. To prove that A()=1, S needs to prove that S proves that A()≠1 (does this still make sense?). Furthermore, S needs to prove that S does not prove that A()≠2 (otherwise, A will perversely return 2 before it considers perversely returning 1).

In particular, the statement "S does not prove that A()≠2" implies "S is consistent" and therefore S cannot prove it.

I also have the suspicion that "S proves that A()≠1" is not something S can prove, but my logic-fu is not strong enough to decide one way or the other.

Replies from: cousin_it
comment by cousin_it · 2011-12-18T18:46:18.263Z · LW(p) · GW(p)

Let's step through the algorithm. If S proves that A()≠2, then A will immediately return 2. That means S has proved a false statement about the universe. If S doesn't prove that A()≠2, but proves that A()≠1, then A will immediately return 1, which also means S has proved a false statement about the universe.

So yeah, I guess calling it a violation of consistency was wrong. It's actually a violation of soundness: all axioms of S must be true statements about the universe. If S is sound, then the above reasoning shows that S cannot prove A()≠a for any a. (Also soundness implies consistency, which might explain why I made the mistake.) Thanks for pointing that out! Edited the post accordingly.

Replies from: None, Vladimir_Nesov
comment by [deleted] · 2011-12-18T19:13:59.605Z · LW(p) · GW(p)

Alright, that makes sense (if my reading of proofs seems uncharitable at times, it is because I know enough about logic to know when a statement doesn't make sense, but I don't know enough to tell what the statement wants to be).

Soundness seems like an interesting property to think about for a formal system. I am reminded of the bizarre systems you can get, e.g., by taking PA and adding the axiom "PA is inconsistent". This, if I recall correctly, is consistent provided PA itself is consistent, but (whether or not it's consistent) it definitely can't be sound.

Replies from: cousin_it
comment by cousin_it · 2011-12-18T19:17:35.290Z · LW(p) · GW(p)

if my reading of proofs seems uncharitable at times

I want people to read my math uncharitably and poke holes in it, otherwise I wouldn't post :-)

comment by Vladimir_Nesov · 2011-12-18T20:19:43.677Z · LW(p) · GW(p)

If S was sound-if-consistent, this problem makes it unsound, hence inconsistent.

comment by DanArmak · 2011-12-18T15:25:29.562Z · LW(p) · GW(p)

Do we know in advance which questions are going to be answered? Do we know agent A from your OP isn't going to end up crying in step 2?

I apologize if I'm just asking about standard CS assumptions here. I have a CS bachelor's degree but I don't remember ever discussing programs that can call halting oracles. Either my memory is faulty (more likely) or I chose the wrong electives.

Replies from: cousin_it
comment by cousin_it · 2011-12-18T15:38:03.080Z · LW(p) · GW(p)

It's easy to write a universe program U that would make A cry. The post only proves that A won't end up crying for one specific U, and outlines an argument why it won't cry in some other similar problems.

The class of universes where A doesn't end up crying is supposed to roughly correspond to the informal class of "fair" decision problems where the agent's action is the only thing that determines the resulting utility, but I have no strong argument why it's a good formalization of that class.

One example of an "unfair" decision problem would be a universe that rewarded you for having a specific algorithm, rather than for returning a specific value. Such "unfair" problems can be made up to punish any decision theory you can come up with, so they're probably not a good test case.
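
For instance, something like this (a made-up illustration; source_code and SOME_SPECIFIC_TEXT are hypothetical):

  def U_unfair():
    # Rewards having a particular source code rather than returning a
    # particular value, so no decision theory can win here by choosing
    # its actions well.
    return 1000000 if source_code(A) == SOME_SPECIFIC_TEXT else 0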

Replies from: DanArmak, whpearson
comment by DanArmak · 2011-12-18T16:17:13.379Z · LW(p) · GW(p)

Thanks for explaining.

comment by whpearson · 2012-01-02T02:34:09.347Z · LW(p) · GW(p)

Is anyone at all working on classes of "unfair" problems, such as ones that give different utilities based upon the amount of time spent computing? Or ones that take into consideration any type of resource used to make that decision (energy or memory). This class seems important to me and less arbitrary than "unfair" problems that punish specific algorithms.

Replies from: cousin_it, cousin_it
comment by cousin_it · 2012-01-02T15:21:58.535Z · LW(p) · GW(p)

Wei Dai has a tentative decision theory that covers some of those cases. I didn't find it very convincing, but it's likely that I overlooked something. Any work on such problems would be very welcome, of course.

Replies from: whpearson
comment by whpearson · 2012-01-02T16:28:01.046Z · LW(p) · GW(p)

I'll have a think. An optimal decision maker for all scenarios seems impossible if your utility is reduced by an amount proportional to the time taken to make the decision ("solving death" has this structure: fewer people die if you solve it earlier). The best general solution I can think of is an infinite table mapping scenarios to the decision computed by something like your UDT + oracle for that scenario. And this can be beaten in each individual scenario by a specialised algorithm for that scenario that needs no lookup.

And it still involves an infinite quantity, which I don't like in theories that I might want to connect to the real world one day (and it requires an infinite amount of precomputation).

I wonder if there is a quality apart from strict optimality that we need to look for. Making the optimal decision in most problems (and what is the correct weighting of scenarios)? Making the right decision eventually?

Anyway I'll think some more. It is definitely thornier and nastier than "fair" problems.

comment by cousin_it · 2012-03-31T08:28:53.894Z · LW(p) · GW(p)

I recently made some progress on your question. Section 4 seems to be the most relevant.

comment by gRR · 2012-03-10T19:20:20.037Z · LW(p) · GW(p)

Another possible modification of the algorithm (a rough sketch follows the list):

  1. Remove the playing-chicken step.
  2. Keep step 2: proving "A()=a ⇒ U()=u".
  3. Change the choice rule to:
    Return the action a for which S does not prove that A()≠a, and which corresponds to the highest utility found in step 2 among all such actions.
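
A minimal sketch of this modified agent in the post's pseudocode style (possible_actions, find_provable_utility and S_proves are assumed stand-ins for the proof searches of the original algorithm):

  def A_modified():
    # No playing-chicken step.
    # Step 2 (unchanged): for every action a, find u such that S proves "A()=a => U()=u".
    utilities = {a: find_provable_utility(a) for a in possible_actions}
    # Modified choice rule: consider only actions a for which S does NOT prove "A()!=a",
    # and return the one with the highest provable utility among them.
    candidates = [a for a in possible_actions if not S_proves("A()!=" + str(a))]
    return max(candidates, key=lambda a: utilities[a])
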
Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2012-03-10T19:52:29.268Z · LW(p) · GW(p)

Your scheme doesn't guarantee that all actions will be considered, or that a known collection of proofs of statements [A()=a ⇒ U()=u] will generate the action that the agent actually chooses. If S proves that A()=a1, then a1 is what your agent will choose, without any consequentialist reason for this choice apparent in the decision algorithm. You are letting S decide for you, implementing a kind of fatalist decision-making, asking "What do the laws of nature say I'm going to do?" instead of "What should I do?" The chicken rule (the diagonal step) is what provides these guarantees (there are likely other ways to this same end).

Replies from: gRR
comment by gRR · 2012-03-10T20:23:13.285Z · LW(p) · GW(p)

I visualized the process like this: S would start proving things. It would notice that "if S does not prove A()!=a AND S does not prove A()!=b AND AgentStep2Proves("A()=a => U()=u") AND AgentStep2Proves("A()=b => U()=w") AND u>w THEN A()!=b", and so it would prove A()!=b. Etc.

Which is wrong, since it makes S reason about itself, which would make it inconsistent. Except it is inconsistent anyway...

On the other hand, this method still does not let the agent prove its decision, since that would make step 2 explode. Damn, I'm stupid :(

But this should work. It's just TDT, I think. Does it have serious drawbacks that make UDT/ADT desirable?

comment by gRR · 2012-03-10T17:45:15.165Z · LW(p) · GW(p)

I still have doubts about the consistency of this architecture. What if the agent sees a copy of itself perform some action in a situation exactly like the one the agent finds itself in now? Wouldn't that mean the agent can now prove that it will perform the same action? (There would be a difference between the agent and the copy, but only an "additive" difference - the agent has additional knowledge that the copy doesn't - so whatever the copy proved, the agent must also be able to prove. [And this fact would be provable to the agent!]).

An example situation would be Parfit's hitchhiker. If the agent finds itself in town, saved by the driver, wouldn't it be able to prove that it will cooperate?
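
For concreteness, Parfit's hitchhiker in the post's style might look roughly like this (a sketch of my own; action 1 = pay, action 2 = refuse, and the driver's prediction is modeled as a call to A):

  def U_hitchhiker():
    # The driver predicts A's choice; the agent is rescued only if predicted to pay.
    if A() == 1:
      return 1000000 - 100   # rescued, and later pays $100 in town
    else:
      return 0               # left in the desert

The worry above is about what the agent can prove once it already observes that the prediction (the copy's answer) came out as "pay".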

In fact, the agent has its own source code, so won't it directly simulate itself in any case, as a side effect of searching for proofs in S? Guess it's just a standard proof of why S is impossible...

comment by gRR · 2012-02-23T02:12:19.368Z · LW(p) · GW(p)

Is it possible to change the definition of A() to something like this:
  1. Let UX(X) be the function that results from the source of U() if all calls "A()" were changed to "eval(X)".
  2. For some action p, prove that for any possible agent X, UX(X) <= UX("return p").
  3. Return p.

Then the agent would be able to prove that it returns a specific action, and yet there would be no contradictions.

It seems to me this should be better than the "chicken rule" (it's really funny and clever, though! :)). But I'm getting no response to similar comments in other posts. Am I making some ridiculous mistake here?

Replies from: None
comment by [deleted] · 2012-02-23T06:57:23.121Z · LW(p) · GW(p)

So, one of the classical examples of what we want this decision algorithm to do is the following: if it is playing a Prisoner's Dilemma game with an identical algorithm, then it should cooperate. The informal reasoning (which this post is trying to formalize) is this: the agent proves that the outputs of A() and A'() are always equal, compares the values of "A() = A'() = cooperate" and "A() = A'() = defect", and decides to cooperate.

The key point is that the agent is not initially told which other agents are identical to it. So your suggestion fails to work because if we replace "A()" with "eval(cooperate)" or "eval(defect)" then we end up proving things about how the agent A'() plays against a CooperateBot or a DefectBot. We conclude that A'() defects against both of these, and end up defecting ourselves.
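
For concreteness, the universe under discussion might look like this (a sketch in the post's style; A1 stands for the other, identical agent A', and the payoff numbers are one standard choice):

  def U():
    # Symmetric Prisoner's Dilemma between A and an identical agent A1.
    payoffs = {('C', 'C'): 2, ('C', 'D'): 0, ('D', 'C'): 3, ('D', 'D'): 1}
    return payoffs[(A(), A1())]

Replacing only the calls to A() by eval(X) leaves the call to A1() in place, so UX("return C") and UX("return D") describe A1() facing what amounts to a CooperateBot or a DefectBot, which is exactly the failure mode described above.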

Replies from: gRR, gRR
comment by gRR · 2012-03-10T18:38:44.590Z · LW(p) · GW(p)

Ok, modification:

  def U():
    box1 = 1000
    box2 = 0 if (A_1() == 2) else 1000000
    return box2 + (box1 if (A_2() == 2) else 0)

or

  def U():
    if (A_1() == 'C' and A_2() == 'C'): return 2
    if (A_1() == 'D' and A_2() == 'C'): return 3
    ...

  def A():
    Forall i, try to prove A()==A_i().
    Let UX(X) be the function that results from the source of U() if all calls "A_i()" for which the system was able to prove A()==A_i() were changed to "eval(X)".
    For some action p, prove that for any possible action q, UX("return q") <= UX("return p").
    Return p.

The result should be equivalent to TDT, I think.

Replies from: None
comment by [deleted] · 2012-03-14T18:35:18.702Z · LW(p) · GW(p)

If every step in that code is valid, then that would work. In my opinion the shakiest step in your algorithm is the "Forall i" in the first line of A(). I'll probably reason myself into a hole if I try to figure out whether or not it's valid.

Replies from: gRR
comment by gRR · 2012-03-14T19:07:22.903Z · LW(p) · GW(p)

Well, at least in some cases (e.g., if A's code is equal to A_i's code) the proof is immediate. Proof is also possible for minor obvious modifications, like variable renaming, so the agent should still behave better than a quining agent for PD.

My goal for inventing this is to have a consistent agent that can know (can prove) its decision beforehand, at least sometimes, without getting into contradictions...

comment by gRR · 2012-02-23T07:15:05.307Z · LW(p) · GW(p)

No, it wouldn't defect against itself, because UX will call the same eval("return p") twice:

UX(X) = { return PDPayoffMatrix[eval(X), eval(X)]; }

The payoff with p=cooperate is greater, so the agent will cooperate.

EDIT: Sorry, that was all wrong, and missed your point. Thinking now...