Decision Theories: A Semi-Formal Analysis, Part I

post by orthonormal · 2012-03-24T16:01:33.295Z · LW · GW · Legacy · 90 comments

Contents

  Or: The Problem with Naive Decision Theory
    Idea 1: Play defense with a Nash equilibrium
    Idea 2: Inference
    Problem: You can't deduce everything
    Naive Decision Theory
    Problem: Beware self-fulfilling prophecies!
    Can NDT be salvaged?
    Well, now what?
    Notes:

Or: The Problem with Naive Decision Theory

Previously: Decision Theories: A Less Wrong Primer

Summary of Sequence: In the context of a tournament for computer programs, I give almost-explicit versions of causal, timeless, ambient, updateless, and several other decision theories. I explain the mathematical considerations that make decision theories tricky in general, and end with a bunch of links to the relevant recent research. This sequence is heavier on the math than the primer was, but is meant to be accessible to a fairly general audience. Understanding the basics of game theory (and Nash equilibria) will be essential. Knowing about things like Gödel numbering, quining and Löb's Theorem will help, but won't be required.

Summary of Post: I introduce a context in which we can avoid most of the usual tricky philosophical problems and formalize the decision theories of interest. Then I show the chief issue with what might be called "naive decision theory": the problem of spurious counterfactual reasoning. In future posts, we'll see how other decision theories get around that problem.

In my Decision Theory Primer, I gave an intuitive explanation of decision theories; now I'd like to give a technical explanation. The main difficulty is that in the real world, there are all sorts of complications that are extraneous to the core of decision theory. (I'll mention more of these in the last post, but an obvious one is that we can't be sure that our perception and memory match reality.)

In order to avoid such difficulties, I'll need to demonstrate decision theory in a completely artificial setting: a tournament among computer programs.

You're a computer programmer entering a tournament for spectacular stakes. But you won't be competing in person: instead, you'll be submitting code for a program to represent you, and that program will be competing one-on-one with other programs in a series of games.

You don't know what the specific games are, but the contest has specified some ground rules:

  • In each round, your program and its opponent are given the source code of both programs and of the round itself (including the payoff matrix), and each must output one of its legal moves; payoffs are then awarded accordingly.
  • The programs don't get to carry any memories from round to round, or modify their source code at any stage.

So, what kind of program should you write?

In the next few posts, we'll examine several ideas, problems and decision theories, increasing our sophistication as we go. We'll use X to denote your program, and x1, . . . , xn to denote its possible outputs; likewise, your opponent in the current round is Y with possible outputs y1, . . . , ym. We'll let U(xi,yj) denote the resulting payout to you if X outputs xi and Y outputs yj.
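
To fix ideas, here is one way a single round might be encoded. This is a minimal sketch: the function names, the example payoff matrix, and the convention of passing source code around as strings are my own illustrative assumptions, not the tournament's actual interface.

```python
# One hypothetical round: a Prisoner's Dilemma between programs X and Y.
# Each program is a function taking (my_source, opponent_source) and
# returning one of its legal outputs.

PAYOFFS = {  # U(x, y): the payout to the row player, X
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def X(my_source, opponent_source):
    return "D"          # placeholder strategy

def Y(my_source, opponent_source):
    return "C"          # placeholder strategy

def run_round(X, Y, X_source, Y_source):
    x = X(X_source, Y_source)
    y = Y(Y_source, X_source)
    return PAYOFFS[(x, y)], x, y

if __name__ == "__main__":
    print(run_round(X, Y, "source of X", "source of Y"))
```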

Idea 1: Play defense with a Nash equilibrium

In our example, we know what utility function the opponent should have: its own expected payout.⁰ Any such game has at least one Nash equilibrium, a pair of strategies (which may be mixed) such that if X and Y both adopted them, then neither would be better off unilaterally switching strategies. In that sense, at least, if X plays a Nash equilibrium, it can be sure of not being exploited by Y. (In particular, X will never end up tricked into cooperating in the Prisoner's Dilemma while Y defects.)
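
As a concrete check on this idea, the sketch below verifies that a pair of mixed strategies is a Nash equilibrium by confirming that neither player gains from any unilateral pure deviation. It's an illustrative helper under my own conventions, not part of the tournament's machinery.

```python
import numpy as np

def is_nash(A, B, p, q, tol=1e-9):
    """A[i, j], B[i, j]: payoffs to players 1 and 2 when they play (i, j).
    p, q: candidate mixed strategies.  Returns True if no unilateral pure
    deviation improves either player's expected payoff."""
    p, q = np.asarray(p), np.asarray(q)
    u1 = p @ A @ q          # player 1's expected payoff at (p, q)
    u2 = p @ B @ q          # player 2's expected payoff at (p, q)
    best1 = np.max(A @ q)   # player 1's best pure-response payoff against q
    best2 = np.max(p @ B)   # player 2's best pure-response payoff against p
    return best1 <= u1 + tol and best2 <= u2 + tol

# Rock, Paper, Scissors: the uniform mixture is the unique equilibrium.
A = np.array([[0, -1, 1], [1, 0, -1], [-1, 1, 0]])
uniform = [1/3, 1/3, 1/3]
print(is_nash(A, -A, uniform, uniform))    # True
print(is_nash(A, -A, [1, 0, 0], uniform))  # False: against pure Rock, player 2 prefers Paper
```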

Of course, there may be more than one Nash equilibrium in a game, and these may be of unequal value if the game is non-zero-sum. (Coordination problems are tricky in general.) So this is underspecified; still, choosing an arbitrary Nash equilibrium is a decent backup strategy. But we can often do better:

Idea 2: Inference

The most basic intuition a human being has in this situation is to start trying to deduce things about Y's output from its source code, or even deduce things directly about U. This idea is best illustrated by playing Rock, Paper, Scissors against a player who always throws Rock: if you figure this out, then you should of course play Paper rather than the Nash equilibrium of 1/3 Rock, 1/3 Paper, 1/3 Scissors. (And in a coordination game, you'd prefer to settle on the same Nash equilibrium that Y outputs.)
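
Against an opponent whose code ignores its input entirely, even a very crude stand-in for an inference module suffices. The sketch below simply runs the opponent, which is only safe for such trivial opponents; the names and the fallback are illustrative.

```python
import random

BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

def Y(my_source, opponent_source):
    return "rock"                       # this opponent always throws Rock

def infer_output(program, its_source, my_source):
    """Crude 'inference': just run the opponent.  This only works because the
    toy opponent ignores its inputs; in general, running a program that in
    turn simulates us risks infinite regress."""
    return program(its_source, my_source)

def X(my_source, opponent_source):
    y = infer_output(Y, opponent_source, my_source)
    if y in BEATS:
        return BEATS[y]                 # exploit the deduced move
    return random.choice(list(BEATS))   # otherwise fall back on the 1/3-1/3-1/3 mix

print(X("source of X", "source of Y"))  # paper
```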

Automating inference is an exceedingly difficult problem in general, though researchers have made substantial progress. All the decision theories we'll talk about will include some sort of "inference module", which can be applied to the source code of Y to deduce its output, applied to the code of the full round (including X, Y, and the payoff matrix) to deduce the value of U, etc.

Problem: You can't deduce everything

Gödel's First Incompleteness Theorem and the Halting Problem both imply that it's impossible to write a program that correctly deduces in general¹ the output of arbitrary other programs. So we have to be prepared for our inference module to fail sometimes.

A well-written inference module will either return a correct answer for a question or return "Unknown"; a sloppily-written module can get stuck in an infinite process, and a badly-written one will return an incorrect answer sometimes. It should be clear that we'll want our inference module to be of the first sort.
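
Here is a toy illustration of the "correct answer or Unknown" behavior, assuming (purely for the sketch) that the inference module only answers when the opponent's source is a single constant return statement, and admits ignorance otherwise.

```python
import ast

UNKNOWN = "Unknown"

def deduce_output(opponent_source):
    """Answer only when the opponent's body is a single constant return
    statement; otherwise admit ignorance rather than guess or loop forever."""
    try:
        tree = ast.parse(opponent_source)
    except SyntaxError:
        return UNKNOWN
    if (len(tree.body) == 1
            and isinstance(tree.body[0], ast.FunctionDef)
            and len(tree.body[0].body) == 1
            and isinstance(tree.body[0].body[0], ast.Return)
            and isinstance(tree.body[0].body[0].value, ast.Constant)):
        return tree.body[0].body[0].value.value
    return UNKNOWN

print(deduce_output("def Y(me, them):\n    return 'rock'"))    # rock
print(deduce_output("def Y(me, them):\n    return them(me)"))  # Unknown
```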

It seems we have enough already to define our first candidate decision theory:

Naive Decision Theory

Let's first consider the approach that seems most obvious. Since we know the source code of the entire round (including X and Y), we could implement the following program:

  • For each xi, assume the output of X is xi, and try to deduce the expected value of U. (That is, try and deduce statements of the form "if (output X)=xi then U=ui" for some ui.)
  • Output the xi with the highest deduced ui, falling back on a Nash equilibrium strategy if any of these deductions fails.
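
In code, the idea might look something like this; the deduce_utility function is a stand-in for a real open-ended proof search, and the Nash fallback is stubbed out.

```python
UNKNOWN = "Unknown"

def naive_decision(my_outputs, deduce_utility, nash_fallback):
    """For each candidate output x, ask the inference module for a statement
    'if (output X)=x then U=u'; if every deduction succeeds, output the x with
    the highest deduced u, otherwise fall back on a Nash equilibrium."""
    deduced = {}
    for x in my_outputs:
        u = deduce_utility(x)   # stand-in for proof search over the round's code
        if u is UNKNOWN:
            return nash_fallback()
        deduced[x] = u
    return max(deduced, key=deduced.get)

# Toy one-player round: $1 for outputting a, $10 for outputting b.
print(naive_decision(["a", "b"],
                     lambda x: {"a": 1, "b": 10}[x],
                     lambda: "a"))     # b
```

Everything that goes wrong below happens inside that innocuous-looking deduce_utility step.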

This "naive decision theory" certainly qualifies for our tournament; it may be a bit trickier to write an inference module that does an open-ended search for the value of U, but it's not impossible (since human mathematicians solve open-ended deduction problems all the time). And it looks like the worst-case scenario is a Nash equilibrium, not total exploitation. What could possibly go wrong?

Problem: Beware self-fulfilling prophecies!

There's a reason that we don't normally ask an automated theorem prover to consider questions about its own mathematical structure: if we ask the question in a certain way, any choice becomes a self-fulfilling prophecy.

If X deduces its own output by a valid process, then it's created a self-fulfilling prophecy for its output, and the problem with that is that a bad self-fulfilling prophecy is just as consistent as a good one. If we want to use statements like "if (output X)=xi then U=ui" to make our final choice, then we have to beware the other half of logical implication, that "not P" implies "if P then Q". This allows for what we might call spurious counterfactuals, which can throw off the actual decision in a perfectly self-consistent way.

Consider, for example, the one-player game where X gets $1 for outputting a, or $10 for outputting b. We want X to do the following:

  • Deduce "if (output X)=a then U=1".
  • Deduce "if (output X)=b then U=10".
  • Output b.

But it's just as consistent for X to do the following:

  • Deduce "(output X)=a".
  • Deduce "if (output X)=a then U=1".
  • Deduce "if (output X)=b then U=0".
  • Output a.

How could that possibly work? Since (output X)=a, the third line is a true logical statement! It's like the fact that you can prove anything if you assume a falsehood (though rather than unconditionally accepting a false premise, X is using a false premise as the antecedent of a material conditional). In this example, the very order in which X looks for proofs (which is part of the definition of X) affects which counterfactuals X can and cannot prove. (One important thing to note is that X cannot prove a spurious counterfactual about the action that it does output, only about the ones that it doesn't!)
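
The underlying logical point is easy to check mechanically: in the world where X actually outputs a, every material conditional with antecedent "(output X)=b" comes out true, whatever consequent we attach. (A toy truth-table check, not a model of the prover.)

```python
def implies(p, q):
    return (not p) or q              # material conditional: "if p then q"

actual_output = "a"                  # what X in fact outputs

for claimed_u in (1, 10, 0, -1000):
    antecedent = (actual_output == "b")
    consequent = (claimed_u == 10)   # pretend U really would be 10 if X output b
    print('if (output X)=b then U=%d : %s'
          % (claimed_u, implies(antecedent, consequent)))
# All four lines print True: the antecedent is false, so the conditional holds
# regardless of the consequent -- the spurious values included.
```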

I don't need to tell you that the second chain of proofs is not what we want X to do. Worse, if this is a real bug, then it could also be an exploitable vulnerability: if your source code for X were released in advance of the tournament, then other programmers might write programs that cause X to generate spurious counterfactuals for all but the moves that are most favorable to Y.

Can NDT be salvaged?

Let's consider some quick fixes before we give up on Naive Decision Theory.

Can we simply prohibit X from ever deducing "(output X)=xi" as a step? This doesn't work because of the possibility of indirect self-reference; X could end up deducing some seemingly innocuous statements which happen to correspond to its own Gödel numbering, and the spurious counterfactuals would follow from those, without X ever having noticed that it had done anything of the sort. And it's provably impossible for X to recognize every possible Gödel numbering for its own inference module!

Next, it might seem like an inference module should stumble on the "genuine" counterfactuals before running into spurious ones, since the "genuine" ones seem necessarily simpler. However, it turns out (as proved by cousin_it) that one can write a valid but malicious inference module which returns and acts on a spurious proof, and (as proved by gRR) that a game with malicious code can similarly dupe an NDT agent with a good inference module!

Lastly, it seems safer to deduce counterfactuals "if (output X)=xi then (output Y)=yj" and apply the U(xi,yj) afterwards. And indeed, I can't see how to make Y exploit X in a straight-up Prisoner's Dilemma if that algorithm is used. There are still two problems, though. First, this algorithm now depends on the values U(xi,yj) being given to it by authority; it can't safely deduce them from the source code for the game. And second, it could two-box on Newcomb's Problem and defect against itself in the Prisoner's Dilemma if it finds spurious counterfactuals there.
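
A sketch of that last variant, with the counterfactual-deduction step again stubbed out, shows how the payoff values are taken as given rather than deduced from the round's code.

```python
UNKNOWN = "Unknown"

def decide_with_given_payoffs(my_outputs, payoff, deduce_opponent, nash_fallback):
    """payoff[(x, y)] is supplied by authority.  For each of my outputs x, try
    to deduce 'if (output X)=x then (output Y)=y'; then compare the resulting
    payoffs instead of deducing U directly from the round's source code."""
    expected = {}
    for x in my_outputs:
        y = deduce_opponent(x)        # counterfactual about Y, not about U
        if y is UNKNOWN:
            return nash_fallback()
        expected[x] = payoff[(x, y)]
    return max(expected, key=expected.get)

# Prisoner's Dilemma against an opponent deduced to copy our move:
PD = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}
print(decide_with_given_payoffs(["C", "D"], PD, lambda x: x, lambda: "D"))  # C
```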

Thus it seems we'll need to do something substantially different.

Well, now what?

There are several ways we could write a decision theory to do inference without risking spurious counterfactuals, and indeed the decision theories we discuss on Less Wrong correspond to different valid approaches. The differences in their decisions come not from better-written inference modules, but from more effective strategies for using their inference module. In the posts to come, I'll show you how they work in detail.

Next: Part II: Causal Decision Theory and Substitution

Notes:

0. Things get wacky if we don't know the utility function of the opponent; fortunately, even special cases like Newcomb's predictor can be expressed as expected utility maximizers for some payoff matrix (in this case, one where the predictor gets rewarded when it matches your decision exactly).

1. That "in general" is important: it's quite possible to write programs that deduce the outputs of plenty of other programs. It just so happens that there's always some program that your inference module will fail on. The classic way to generate a failure case is to run the inference module on a modified version of itself, such that returning a correct answer would induce a contradiction. This isn't just a hypothetical disability: if X is trying to deduce the output of Y in this round, and Y is trying to deduce the output of X in this round, then we might have exactly this problem!
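
The construction can be sketched concretely; in this toy version the "inference module" deduces by bounded simulation, and all of the names are illustrative.

```python
def predict(program_source, depth=3):
    """Toy 'inference module': guess a program's output by simulating it,
    giving up with a default answer once its simulation budget runs out.
    (A real module would search for proofs instead, but the same diagonal
    trick defeats it.)"""
    if depth == 0:
        return "A"                                   # out of budget: just guess
    env = {"predict": lambda s: predict(s, depth - 1),
           "SELF": program_source}
    exec(program_source, env)
    return env["main"]()

# A program built to defeat the module: ask the module about yourself,
# then do the opposite of whatever it says.
ADVERSARY = '''
def main():
    return "B" if predict(SELF) == "A" else "A"
'''

prediction = predict(ADVERSARY)
env = {"predict": predict, "SELF": ADVERSARY}
exec(ADVERSARY, env)
actual = env["main"]()
print(prediction, actual)   # the prediction and the actual output disagree
```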

90 comments

Comments sorted by top scores.

comment by Sniffnoy · 2012-03-24T18:21:53.507Z · LW(p) · GW(p)

Markup note: Footnote links should be made relative so they don't force a reload of the page due to its URL having been changed.

Replies from: orthonormal
comment by orthonormal · 2012-03-24T18:28:10.641Z · LW(p) · GW(p)

How do I do that? Currently the link is of the form:

/lw/axl/#utilities

Replies from: Vladimir_Nesov, Nisan
comment by Vladimir_Nesov · 2012-03-24T18:49:19.157Z · LW(p) · GW(p)

Like this:
(fixed)

Replies from: orthonormal
comment by orthonormal · 2012-03-24T18:52:37.697Z · LW(p) · GW(p)

Thanks!

comment by Nisan · 2012-03-24T18:54:43.319Z · LW(p) · GW(p)

You should be able to link to just #utilities.

comment by Vladimir_Nesov · 2012-03-24T21:13:14.267Z · LW(p) · GW(p)

if your source code for X were released in advance

But everyone's source code is communicated at each round to everyone. What do you mean, "in advance"? What for?

The problem is that we have an exploitable vulnerability here: if your source code for X were released in advance, then other programmers could write a Gödel-numbering for X's inference module into their own programs in such a way that X has to take a circuitous route to the "genuine" counterfactuals (or so that the genuine ones come back "Unknown"), and in the meantime X will stumble upon exactly the spurious counterfactuals that help Y.

This is very speculative. While an agent can make predictions about itself as long as it wishes, thus making a natural counterfactual inference performed by the other agent longer, that makes the spurious counterfactuals longer as well. Wei asks in another thread:

Having f(L)=L+1 would be sufficient to protect against any "moral arguments" that involve first proving agent()!=a for some a. Do we have any examples of "spurious" proofs that are not of this form?

Unless an agent sabotages itself by deciding arbitrarily, and then observes spurious inferences from that decision, I don't see how to push spurious inferences in front of the natural ones.

I think this part of the post should be rephrased, so that it says that we don't know how to prove that even such bizarre exploitation strategy is impossible, instead of the current assertion (unless you have background knowledge or a new result that suggests that this conjecture is reasonable).

Edit: Turns out this is indeed possible, if X can be tricked into checking a specifically contrived proof out of order. Here's a post describing the setup.

Replies from: orthonormal
comment by orthonormal · 2012-03-24T23:01:43.107Z · LW(p) · GW(p)

But everyone's source code is communicated at each round to everyone. What do you mean, "in advance"? What for?

It's easier for me to imagine a programmer writing a program specifically to exploit mine than it is to imagine a program that's able to recognize an opportunity for such exploitation in general. Just as a Nash equilibrium strategy can't be exploited even if it's publicly known in advance, we'd like our decision theories to be secure from exploitation even if their source code were publicly known.

This is very speculative.

I agree; I just thought of it as I was writing the post, and in retrospect I should have asked the decision-theory mailing list before putting it in. But my intuition is that this sort of malice could be possible, using the equivalent of a Henkin sentence. I'll edit to make it clearer that this is speculation, though.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2012-03-24T23:15:15.131Z · LW(p) · GW(p)

But my intuition is that this sort of malice could be possible, using the equivalent of a Henkin sentence.

This sounds like a way of making a natural proof, not a spurious one... Could you explain in more detail?

Replies from: orthonormal
comment by orthonormal · 2012-03-24T23:23:51.516Z · LW(p) · GW(p)

Could you explain in more detail?

I had the intuition that one could both "push X away from genuine counterfactuals" using diagonalization and "pull X toward particular spurious counterfactuals" using specially constructed explicit Henkin sentences (which the inference module will find). But I'm more confident that the first half makes sense than that the second half does.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2012-03-24T23:29:56.991Z · LW(p) · GW(p)

"pull X toward particular spurious counterfactuals" using specially constructed explicit Henkin sentences (which the inference module will find)

Right, but sentences where? Why does our agent believe them? Can you give a sketch of an analogous example?

Replies from: orthonormal
comment by orthonormal · 2012-03-24T23:36:24.964Z · LW(p) · GW(p)

The inference module of X will at some point start trying to prove sub-problems about the code of Y if it doesn't find an easily proved counterfactual. In the source code of Y, in a place where X will apply its inference module, the programmer has included an explicit Henkin sentence (i.e. one that contains the ingredients for its own proof) such that when X applies its inference module, it quickly proves "if X!=a then U=-1000", which happens to be the sort of thing the NDT agent was looking for. This section of code doesn't affect Y's decision except when Y is playing a game of this sort against X.

Again, highly speculative.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2012-03-24T23:55:17.822Z · LW(p) · GW(p)

In the source code of Y, in a place where X will apply its inference module, the programmer has included an explicit Henkin sentence (i.e. contains the ingredients for its own proof) such that when X applies its inference module, it quickly proves "if X!=a then U=-1000"

I still don't understand what you're saying. What's the nature of this "Henkin sentence", that is in what way is it "included" in Y's source code? Is it written in the comments or something? What does it mean for X to "apply its inference module"? My only guess is that you are talking about X starting to enumerate proofs in order other than their length, checking the proofs that happen to be lying around in Y's code just in case, so that including something in Y's code might push X to consider a proof that Y wants it to consider earlier than some other proof. Is it what you mean?

How would "if X!=a then U=-1000" follow from a Henkin sentence (it obviously can't make anything it likes true)?

Replies from: orthonormal
comment by orthonormal · 2012-03-25T00:53:13.022Z · LW(p) · GW(p)

The mental image of an inference module that I'm trying to trick is something like a chess program, performing a heuristically biased depth-first search on the space of proofs, based on the source code of the round. If the first path it takes is a garden path to an easy but spurious counterfactual proof, and if the genuine counterfactual is unreachable at the same level (by some diagonalization of X's proof method), then X should be fooled. The question is whether, knowing X's source code, there's a Y which leads X's heuristics down that garden path.

Since in the end we'll achieve X=a, the statement "if X!=a then U=-1000" is true, and it should be provable by NDT if it finds the right path, so there should be some way of helping NDT find the proof more quickly.

I don't have a strong intuition for how to fool the "enumerate and check all proofs up to length N" inference module.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2012-03-25T01:14:07.510Z · LW(p) · GW(p)

Okay. Let's say we write an X that will start from checking any proof given to it (and accepting its conclusion). How can we construct a proof of X=a that X can read then? It looks like the more X reads, the longer the proof must be, and so there doesn't exist a proof that X can also read. I don't see how to construct a counterexample to this property without corrupting X's inference system, though I imagine some quining trick might work...

Replies from: orthonormal
comment by orthonormal · 2012-03-25T01:19:47.404Z · LW(p) · GW(p)

Right. That's why I acknowledge that this is speculative.

If it turns out there's really no need to worry about spurious counterfactuals for a reasonable inference module, then hooray! But mathematical logic is perverse enough that I expect there's room for malice if you leave the front door unlocked.

Replies from: Vladimir_Nesov, cousin_it
comment by Vladimir_Nesov · 2012-03-25T11:28:12.191Z · LW(p) · GW(p)

...and Slepnev posted a proof (on the list) that in my formulation, X can be successfully deceived. It's not so much a Henkin sentence, just a program that enumerates all proofs, looking for a particular spurious counterfactual, and doesn't give up until it finds it. If the spurious counterfactual is provable, the program will find it, and so the agent will be tricked by it, and then the spurious counterfactual will be true. We have an implication from provability of the spurious argument to its truth, so by Loeb's theorem it's true, and X is misled. So you were right!

comment by cousin_it · 2012-03-25T11:50:04.638Z · LW(p) · GW(p)

I just wrote up the proof Nesov is talking about, here.

comment by AlephNeil · 2012-03-24T19:33:57.695Z · LW(p) · GW(p)
  2. Actually, this is an open problem so far as I know: show that if X is a Naive Decision Theory agent as above, with some analyzable inference module like a halting oracle, then there exists an agent Y written so that X cooperates against Y in a Prisoner's Dilemma while Y defects.

Let me just spell out to myself what would have to happen in this instance. For definiteness, let's take the payoffs in prisoner's dilemma to be $0 (CD), $1 (DD), $10 (CC) and $11 (DC).

Now, if X is going to co-operate and Y is going to defect then X is going to prove "If I co-operate then I get $0". Therefore, in order to co-operate, X must also prove the spurious counterfactual "If I defect then I get $x" for some negative value of x.

But suppose I tweak the definition of the NDT agent so that whenever it can prove (1) "if output = a then utility >= u" and (2) "if output != a then utility <= u", it will immediately output a. (And if several statements of the forms (1) and (2) have been proved, then the agent searches for them in the order that they were proved.) Note that our agent will quickly prove "if output = 'defect' then utility >= $1". So if it ever managed to prove "if output = 'co-operate' then utility = $0", it would defect right away.

Since I have tweaked the definition, this doesn't address your 'open problem' (which I think is a very interesting one) but it does show that if we replace the NDT agent with something only slightly less naive, then the answer is that no such Y exists.

(We could replace Prisoner's Dilemma with an alternative game where each player has a third option called "nuclear holocaust", such that if either player opts for nuclear holocaust then both get (say) -$1, and ask the same question as in your note 2. Then even for the tweaked version of X it's not clear that no such Y exists.)

ETA: I'm afraid my idea doesn't work: The problem is that the agent will also quickly prove "if 'co-operate' then I receive at least $0." So if it can prove the spurious counterfactual "if 'defect' then receive -1" before proving the 'real' counterfactual "if 'co-operate' then receive 0" then it will co-operate.

We could patch this up with a rule that said "if we deduce a contradiction from the assumption 'output = a' then immediately output a" which, if I remember rightly, is Nesov's idea about "playing chicken with the inconsistency". Then on deducing the spurious counterfactual "if 'defect' then receive -1" the agent would immediately defect, which could only happen if the agent itself were inconsistent. So if the agent is consistent, it will never deduce this spurious counterfactual. But of course, this is getting even further away from the original "NDT".
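
A toy sketch of that patched rule, with 'prove' standing in for a real bounded proof search:

```python
def decide(actions, prove, evaluate_counterfactual):
    """Chicken rule: if we can prove that we won't take some action a,
    immediately take a.  A consistent agent therefore never proves
    'output != a', which blocks spurious counterfactuals of that form."""
    for a in actions:
        if prove("output != %s" % a):
            return a                              # the chicken rule fires
    utilities = {a: evaluate_counterfactual(a) for a in actions}
    return max(utilities, key=utilities.get)

# With a prover that never derives 'output != a', we just take the best action:
print(decide(["C", "D"], lambda s: False, lambda a: {"C": 3, "D": 1}[a]))  # C
```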

Replies from: orthonormal
comment by orthonormal · 2012-03-24T20:40:11.580Z · LW(p) · GW(p)

Note that our agent will quickly prove "if output = 'defect' then utility >= $1".

Your intuition that it gets deduced before any of the spurious claims like "if output = 'defect' then utility <= -$1" is taking advantage of an authoritative payoff matrix that X can't safely calculate xerself. I'm not sure that this tweaked version is any safer from exploitation...

Replies from: AlephNeil
comment by AlephNeil · 2012-03-24T21:15:44.027Z · LW(p) · GW(p)

an authoritative payoff matrix that X can't safely calculate xerself.

Why not? Can't the payoff matrix be "read off" from the "world program" (assuming X isn't just 'given' the payoff matrix as an argument.)

Replies from: orthonormal
comment by orthonormal · 2012-03-24T23:29:57.347Z · LW(p) · GW(p)

The one-player game that I wrote out is an example of a NDT agent trying to read off the payoff matrix from the world program, and failing. There are ways to ensure you read off the matrix correctly, but that's tantamount to what you do to implement CDT, so I'll explain it in Part II.

comment by Nisan · 2012-03-24T19:01:22.130Z · LW(p) · GW(p)

I look forward to the rest of this sequence!

comment by Irgy · 2012-04-03T03:22:57.621Z · LW(p) · GW(p)

Well, this article puts my confusion from the primer into context, and I think I understand the issue now.

The "problem" (and it's ultimately not a problem, but I'll get to that) I have is that the game these programs are playing is ill-posed, in that it's not a closed system. It might appear as if they're playing, for example, Prisoners' Dilemma, but with access to each other's source code they're really playing some sort of Prisoners' Meta-Dilemma, a fundamentally different game for all that it might seem superficially similar. Now I'm not enough of an expert to tell you exactly what defines a "well posed" game, but I'm quite confident this is not it. Or maybe I'm abusing the terminology a bit but hopefully my point is clear enough (if not I'll elaborate). The distinction though is that the mechanism by which you make your decision on how to play should be outside the game itself, and therefore only observed indirectly.

One property this metagame has is that there is fundamentally no "correct" answer for what "decision theory" the programs should use. The proof is simple: imagine you have a correct answer D. Imagine then that this program faces a population of the following opponents (I'll use the Prisoners' Dilemma subgame as a simple example):

  • Look at your opponent's source code.
  • If your opponent is using D, defect.
  • Otherwise co-operate.

D is in fact the single worst strategy in this scenario by quite a margin, regardless of what it is.

But that's fine, why would we even expect there to be a single correct strategy for every possible set of opponents? Of course there isn't. This is where we take a step back to a game that is actually well posed and for which Naive Decision Theory works fine - the programmers' game. Applying it there would go something like this:

  • Create a probability distribution over the (potentially infinite) space of all possible programs that yours might be up against.
  • Find the program which maximises the expected utility against that distribution of opponents.

For an infinite space of possible programs the second step might technically be impossible to perform optimally, but you do the best you can. There is of course a devil in the details of the first step in how you create that distribution, and when you look into it ultimately it's a game between programmers, with its own Nash Equilibria and co-operative strategies and so on, not just a simple decision. But until you start having programmers read each others' minds (in which case you've really just pointlessly shifted the whole thing up one meta-level) it's at least a well-posed (if complex) game that the ordinary approaches should work for.

As I said earlier though the fact that I might call these games ill-posed doesn't mean they're not interesting. I'll be fascinated to read about the strategies that people have devised for them and the attempts to be as general as possible. I'm a little concerned by the fact that the arguments so far seem to be along the lines of "You could do X, but then what if Y happens?", purely because as I've shown above there is ultimately no end to that chain of logic. But maybe you'll eventually step outside of it. My guess is that ultimately the "smarter" program wins. Certainly my generic counter-strategy has to generally be more complex (and therefore "smarter"? Maybe) than the "D" strategy it punishes in order to identify "D" in the first place.

In any case I look forward to finding out.

Replies from: orthonormal, fubarobfusco
comment by orthonormal · 2012-04-08T16:25:32.399Z · LW(p) · GW(p)

There's a relevant footnote in the next post: while it's possible to write agents that punish a particular decision theory, an agent that's trying (in any meaningful sense) to maximize its stated payoffs won't do this, and it's certainly not a constant strategy (i.e. a one-player game) either. In that sense, we can say that such problems are less likely to be encountered in reality.

It might appear as if they're playing, for example, Prisoners' Dilemma, but with access to each other's source code they're really playing some sort of Prisoners' Meta-Dilemma, a fundamentally different game for all that it might seem superficially similar.

I agree, in the same way that Iterated Prisoners' Dilemma (with no source code access) is a fundamentally different game from One-Shot Prisoners' Dilemma (with no source code access). But there are real-life instances that are closer to the source-code access version than to the classical game theory version- for instance, a Cold War between two powers, each of which has a network of spies in the other's government.

comment by fubarobfusco · 2012-04-03T08:52:49.091Z · LW(p) · GW(p)

This is where we take a step back to a game that is actually well posed and for which Naive Decision Theory works fine - the programmers' game. Applying it there would go something like this:

  • Create a probability distribution over the (potentially infinite) space of all possible programs that yours might be up against.
  • Find the program which maximises the expected utility against that distribution of opponents.

... I believe this is called AIXI.

(Not exactly, but ...)

comment by drnickbone · 2012-03-27T19:30:29.499Z · LW(p) · GW(p)

I suspect the difficulty you've hit here is that Eliezer's theory (TDT and variants) involves consistent reasoning about counterpossible statements, not just counterfactual statements ("If my algorithm were to pick a in this situation, then my expected utility would be -10000000, so I won't do that then"). I can see what he's trying to achieve, but unfortunately I don't know any way in standard Mathematics to reason consistently about inconsistencies, or (in Modal Logic) to analyze impossible possible worlds. So this looks like a major problem.

I'm also struck by the similarity to an argument for 2-boxing in Newcomb's problem. Basically, the causal decision theorist knows he is a causal decision theorist, and knows that Omega knows that, so he proves a theorem to the effect that the opaque box won't contain $1 million. And then having proved that, it's blindingly obvious that he should 2-box (it's $1000 or nothing). So he 2 boxes. Nothing inconsistent there, just tragic.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2012-03-27T21:50:10.244Z · LW(p) · GW(p)

So this looks like a major problem.

But one that seems to have multiple "lines of attacks". Have you seen cousin_it's recent posts in discussion for example?

Basically, the causal decision theorist knows he is a causal decision theorist

What do you mean by "causal decision theorist"? Is it:

  1. A decision theorist (somebody who studies decision theory) who, based on the arguments he has seen so far, thinks CDT is the right decision theory? Or
  2. An AI that has access to its own source code and can see that it's running CDT?
Replies from: drnickbone
comment by drnickbone · 2012-03-28T22:42:16.178Z · LW(p) · GW(p)

I hadn't seen the cousin_it posts, thanks, though I'm reading them now.

One observation is that I thought myself of the "tweak" whereby if TDT, UDT or similar manages to prove that it will not perform an action a, then it immediately does perform an action a.

This at least prevents a sound decision theory finding two distinct proofs of the form "If my algorithm were to do a, then my utility would be 1000" and "If my algorithm were to do a, then my utility would be -1000". However, a couple of reservations:

  1. The tweak sounds pretty weird (if I'm president, and I prove to myself that I won't push the nuclear button, then suddenly I will. Huh???)

  2. If I'm trying to reason about counterpossibles (or impossible possible worlds) then the usual laws of logic and proof just fly out of the window. I can't legitimately infer from A -> B and A -> ~B that ~A. Indeed it's not obvious that I can infer anything at all; it will depend on how badly my impossible worlds behave. Some models of impossible worlds are complete anarchy, with no logical relationships at all between propositions.

On the question about CDT, I hadn't really thought about whether the agent was human or automated, or whether it has access to its own source-code or just a general self-awareness of its decision-making powers. To be honest, I don't think it makes much difference; human agents who are utterly convinced of the correctness of CDT (and there are some of them) will reach much the same conclusion as the automated agent (i.e. that there just isn't going to be $1 million in the opaque box, so don't worry about it). And when noticing other agents who seem to be getting the $1 million, they'll shrug and say "Oh dear, Omega is rewarding irrationality, but so what? Even if I had switched to TDT or UDT instead of CDT, there would still be some possible Omega who'd penalise me for that switch, and I had no way of knowing in advance which Omega I was likely to meet; in fact I was pretty confident I wouldn't meet any Omega ever. Can't win them all..."

Replies from: Vladimir_Nesov, Vladimir_Nesov
comment by Vladimir_Nesov · 2012-03-29T18:44:45.422Z · LW(p) · GW(p)

If I'm trying to reason about counterpossibles (or impossible possible worlds) then the usual laws of logic and proof just fly out of the window. I can't legitimately infer from A -> B and A -> ~B that ~A.

In our models so far, this isn't a problem, you just use a factory-standard first order inference system. What do you mean by "can't legitimately infer"? The worst that can happen is that you infer something misleading (but still valid), and the diagonal step/chicken rule is one way of ensuring that doesn't happen.

comment by Vladimir_Nesov · 2012-03-29T18:41:26.139Z · LW(p) · GW(p)

I hadn't seen the cousin_it posts, thanks, though I'm reading them now.

More specifically, see the last 4 posts linked from the ADT wiki page:

Replies from: drnickbone
comment by drnickbone · 2012-03-30T00:37:50.136Z · LW(p) · GW(p)

Thanks for the pointers to these... I'll probably move my comments to there. On your other point:

In our models so far, this isn't a problem, you just use a factory-standard first order inference system.

and...

If I'm trying to reason about counterpossibles (or impossible possible worlds) then the usual laws of logic and proof just fly out of the window. I can't legitimately infer from A -> B and A -> ~B that ~A.

The inference rule A -> B, A -> ~B |- ~A is the familiar principle of proof by contradiction, i.e. "reductio ad absurdum" of the alternative. The difficulty I see is that you can't really use reductio ad absurdum in an impossible worlds semantics, because the impossible worlds are - ahem - absurd. That's kinda the point.

I haven't read all the details of what you're trying, but my suspicion is that using standard off-the-shelf logics won't work here: you will at least need to limit the application of the inference rules e.g. so that they can be applied only up to a maximum finite number of times (and so the set of theorems is not closed under logical inference). Maybe you need some sort of belief semantics, where the "worlds" admitted into the belief-system are strictly a mixture of possible and impossible worlds, but the agent hasn't realized yet that the impossible ones actually are impossible or will never realize it, so they "seem" to be possible. So he's chugging along analysing those worlds using inference rules that will eventually hit a contradiction, but not yet. For "practical purposes" the system is consistent and won't blow up. Though be very careful, because an inconsistent system with the rule ("If I prove I won't do a, then do a") could accidentally prove that it won't nuke the planet and then (inconsistently) nuke the planet. Oops.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2012-03-30T20:12:49.621Z · LW(p) · GW(p)

my suspicion is that using standard off-the-shelf logics won't work here: you will at least need to limit the application of the inference rules e.g. so that they can be applied only up to a maximum finite number of times (and so the set of theorems is not closed under logical inference)

This happens automatically, because (1) only the statements contributing to the decision matter, and there's a finite number of them, and (2) presence of things like the diagonal step/chicken rule in the decision algorithm implies that inferring of absurdity doesn't happen. So we can prove that it's not the case that an agent can infer absurdity, even though it's free to use any first order inference it wants, and even though absurdity does follow from the axioms (in the setting without a provability oracle).

In the setting with provability oracle, agent's algorithm is constructed in such a way that its axioms become sufficiently weak that the impossible counterfactuals (from the point of view of a stronger theory) remain consistent according to the theory used by the agent, and so in that setting the impossible worlds have actual models.

comment by Kenoubi · 2012-03-24T21:50:15.361Z · LW(p) · GW(p)

Your self-fulfilling prophecy example works for the iteration of the for loop (described in "For each xi, assume the output of X is xi, and try to deduce the expected value of U.") in which the output is assumed to be a, but for the iteration in which the output is assumed to be b, proving that the output is a would be to prove a contradiction. "if (output X)=b then U=0" is one possible outcome, but U could also equal anything else.

I don't see how the NDT algorithm as given allows "(output X)=a" to be proved outside of the for loop at all. I would think it would take (output X)=whatever for each iteration through the for loop as a given before trying to prove anything, in which case in the run of the for loop in which (output X)=b is the given, proving (output X)=a is a clear contradiction, one which I would think our prover could avoid unless our axiomatic system is contradictory in the first place.

Or to rephrase, I don't think "For each xi, assume the output of X is xi, and try to deduce the expected value of U." and "(That is, try and deduce statements of the form "if (output X)=xi then U=ui" for some ui)." are actually equivalent at all, and I think the self-fulfilling prophecy example follows the second and ignores the first.

Replies from: Dmytry
comment by Dmytry · 2012-03-24T22:05:59.670Z · LW(p) · GW(p)

The point is that 'given (output X)=a' may eventually let you prove a contradiction when the output is not, in fact, a, and you have added a false statement as input.

In practice, one does not use a two-way axiom of (output X)=a but a one-way substitution rule 'replace (output X) with a'. The rule may be applied once at the start, or through the first N steps of the deduction process (to catch the cases where deduction manages to deduce X from a slightly modified X that was included inside the 'other algorithm'; note that one can't do it forever, because at some point the proof checker arrives at X from first principles, and the contradiction can be introduced by substitution).

The issue he described is specific to using (output X)=a as a given, which allows you to e.g. do some algebra, arrive at a number a for any reason, and then replace a with (output X), which lets you contradict yourself, or make a self-fulfilling prophecy. The intent of making it a given is to make the substitutions one way, but the theorem prover can do substitutions the other way around.

Replies from: Kenoubi
comment by Kenoubi · 2012-03-24T23:57:47.990Z · LW(p) · GW(p)

Here's what I think you're saying: there is one value that will actually be output, call it o. In every iteration of the for loop except the one where you assume the output is o, you have assumed a false statement. From this contradiction you should be able to derive anything, and in particular, derive U(this choice)=some large negative number, such that o will appear to be the best choice. Furthermore, this argument makes no reference to what o actually is, so the algorithm can output any choice this way.

That's a very good argument, although I never would have figured it out from the article and it took some thinking to get it from your comment. I think it proves that the algorithm is underspecified though, not (necessarily) faulty; the description given is not enough to actually figure out what the algorithm will output.

As for the rest of your comment, I think by "in practice" you mean "in decision theories other than NDT which work better"?

Replies from: Dmytry
comment by Dmytry · 2012-03-25T03:58:43.714Z · LW(p) · GW(p)

re: the argument, not mine, they explained it somewhere else, albeit it is hard to get.

By "in practice", I mean in anything that hasn't got computing power substantially larger than that of the universe. You don't feed the tail into the dragon when your dragon has a finite mouth and a long tail; that is to say, you don't feed source code into theorem provers if you don't have to; instead you actively try to simplify the stuff that is being processed by the theorem prover. You don't make equivalence of your code with provisional output an axiom, you make it a substitution rule that works one way (it still makes nonsense eventually, but easily long after all the stars in the universe have burnt themselves out, and that too can be fixed). That is assuming you are using a state-of-the-art automated algebra package to implement decision theory.

comment by Vaniver · 2012-03-24T16:51:07.275Z · LW(p) · GW(p)

The programs don't get to carry any memories from round to round, or modify their source code at any stage.

! No memory is a big hole to leave in your explanation. It also destroys your ability to do inference, i.e. idea 2.

Overall, it looks like for NDT you don't want to have access to your opponent's source code, and you do want to have memory. You don't need to infer if you can read!

Replies from: orthonormal
comment by orthonormal · 2012-03-24T18:23:35.458Z · LW(p) · GW(p)

What you're thinking of seems to be EDT, not NDT.

I warned at the beginning that this was an artificial setting, but it's one in which we can express and distinguish CDT, TDT, ADT, UDT and others. The lack of memory serves to focus attention on what we can do; note that it's perfectly legitimate to try and deduce what Y would have done against another opponent, if you want that kind of data.

Replies from: Vaniver
comment by Vaniver · 2012-03-24T23:09:09.478Z · LW(p) · GW(p)

It's a lot easier for me to see the applications for an iterated game where you try to model your opponent's source code given the game history than a single-shot game where you try to predict what your opponent will do given their source code and your source code. That seems like it just invites infinite regress.

Suppose we know we're going to play the one-shot prisoner's dilemma with access to source code, and so we want to try to build the bot that cooperates only with bots that have functionally the same rule as it, i.e. the 'superrational prisoner' that's come up on LW. How would we actually express that? Suppose, for ease of labeling, we have bots 1 and 2 that can either C or D. Bot 1 says "output C iff 2 will output C iff 1 will output C iff 2 will output C iff..." Well, clearly that won't work. We also can't just check to see if the source code is duplicated, because we want to ensure that we also cooperate with similar bots, and don't want to get fooled by bots that have that commented out.

Is there a way to code that bot with the sort of propositions you're doing? Outcomes aren't enough- you need models of outcome dependencies that can fit inside themselves.

Replies from: Vladimir_Nesov, cousin_it, orthonormal
comment by Vladimir_Nesov · 2012-03-24T23:45:13.812Z · LW(p) · GW(p)

Well, that's described in these posts, or do you mean something else?

Replies from: Vaniver
comment by Vaniver · 2012-03-25T07:09:58.242Z · LW(p) · GW(p)

That is what I meant. My problem is that I don't see how to express that strategy correctly with the vocabulary in this post, and I don't understand the notation in those posts well enough to tell if they express that idea correctly.

For example, I'm not comfortable with how Eliezer framed it. If my strategy is "cooperate iff they play cooperate iff I play cooperate," then won't I never cooperate with myself, because I don't have the strategy "cooperate iff they play cooperate"?

(I do think that general idea might be formalizable, but you need models of outcome dependencies that can fit inside themselves. Which maybe the notation I don't understand is powerful enough to do.)

Replies from: Douglas_Knight
comment by Douglas_Knight · 2012-03-25T18:54:18.865Z · LW(p) · GW(p)

"Cooperate iff they play cooperate iff I play cooperate" is imprecise. Your complaint seems to stem from interpreting it to pay too much attention to the details of the algorithms.* One reason to use counterfactuals is to avoid this. One interpretation of the slogan is "A()=C iff A()=B()." This has symmetry, so the agent cooperates with itself. However, it this is vulnerable to self-fulfilling prophecy against the cooperation rock (ie, it can choose either option) and its actions against the defection rock depend on implementation details. It is better to interpret it as cooperate if both counterfactuals "A()=C => B()=C" and "A()=D => B()=D" are true. Again, this has symmetry to allow it to cooperate with itself, but it defects against rocks. In fact, an agent like from this post pretty much derives this strategy and should cooperate with an agent that comes pre-programmed with this strategy. The problem is whether the counterfactuals are provable. Here is a thread on the topic.

* Another example of agents paying too much attention to implementation details is that we usually want to exclude agents that punish other agents for their choice of decision theory, rather than their decisions.
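
A minimal sketch of that criterion, with 'provable' standing in for a bounded proof search over both programs' source code:

```python
def cooperate_iff_counterfactuals(provable):
    """Cooperate exactly when both 'A()=C => B()=C' and 'A()=D => B()=D'
    are provable; otherwise defect."""
    if provable("A()=C => B()=C") and provable("A()=D => B()=D"):
        return "C"
    return "D"

# Against a cooperation rock only the first implication is provable (the
# second is false), so we defect; likewise against a defection rock:
print(cooperate_iff_counterfactuals(lambda s: s == "A()=C => B()=C"))  # D
```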

Replies from: Vaniver
comment by Vaniver · 2012-03-26T01:09:18.477Z · LW(p) · GW(p)

Aren't the implementation details sort of the whole point? I mean, suppose you code a version of cooperate iff "A()=C => B()=C and A()=D => B()=D" and feed it a copy of itself. Can it actually recognize that it should cooperate? Or does it think that the premise "A()=C" implies a cooperation rock, which it defects against?

That is, if the algorithm is "create two copies of your opponent, feed them different inputs, and combine their output by this formula," when you play that against itself it looks like you get 2^t copies, where t is how long you ran it before it melted down.

I see how one could code a cliquebot that only cooperated with someone with its source code, and no one else. I don't see how one could formalize this general idea- of trying D,C then C,C then D,D (and never playing C,D)- in a way that halts and recognizes different implementations of the same idea. The problem is that the propositions don't seem to cleanly differentiate between code outcomes, like "A()=C", and code, like "A()=C => B()=C".

Replies from: Douglas_Knight
comment by Douglas_Knight · 2012-03-26T02:36:46.587Z · LW(p) · GW(p)

No, the algorithm is definitely not to feed inputs in simulation. The algorithm is to prove things about the interaction of the two algorithms. For more detail, see the post that is the parent of the comment I linked.

"A()=C => B()=C" is a statement of propositional logic, not code. It is material implication, not conditional execution.

Replies from: Vaniver
comment by Vaniver · 2012-03-26T03:46:55.138Z · LW(p) · GW(p)

The algorithm is to prove things about the interaction of the two algorithms.

This is possibly just ignorance on my part, but as far as I can tell what that post does is take the difficult part and put it in a black box, which leaves me no wiser about how this works.

"A()=C => B()=C" is a statement of propositional logic, not code. It is material implication, not conditional execution.

Code was the wrong phrase for me to use- I should have gone with my earlier "outcome dependency," which sounds like your "material implication."

I think I've found my largest stumbling block: when I see "B()=C", it matters to me what's in the (), i.e. the inputs to B. But if B is a copy of this algorithm, it needs more information than "A()=C" in order to generate an output. Or, perhaps more precisely, it doesn't care about the "=C" part and makes its decision solely on the "A()" part. But then I can't really feed it "A()=C" and "A()=D" and expect to get different results.

That is, counterfactuals about my algorithm outputs are meaningless if the opponent's algorithm judges me based on my algorithm, not my outputs. Right? Or am I missing something?

Replies from: Douglas_Knight
comment by Douglas_Knight · 2012-03-26T04:41:40.067Z · LW(p) · GW(p)

The input to A and B is always the same, namely the source code of A and B and the universe. In fact, often we consider all that to be hard coded and there to be no input, hence the notation A().

Could I rephrase your last paragraph as "counterfactuals about the output of my algorithm are meaningless since my algorithm only has one output"? Yes, this is a serious problem in making sense of strategies like "cooperate iff the opponent cooperates iff I do." Either my opponent cooperates or not. That is a fact about math. How could it be different?

And yet, in the post I linked to, cousin_it managed to set up a situation in which there are provable theorems about what happens in the counterfactuals. Still, that does not answer the question of the meaning of "logical counterfactuals." They seem to match the ordinary language use of counterfactuals, as in the above strategy.

If the "black box" was the provability oracle, then consider searching for proofs of bounded lengths, which is computable. If the black box was formal logic (Peano arithmetic), well, it doesn't look very black to me, but it is doing a lot of work.

I don't think you should be judging material implication by its name. Perhaps by "outcome dependency" you mean "logical counterfactual." Then, yes, counterfactuals are related to such implications, by definition. The arrow is just logical implication. There are certain statements of logic that we interpret as counterfactuals.

Replies from: Vaniver
comment by Vaniver · 2012-03-26T05:25:05.489Z · LW(p) · GW(p)

Could I rephrase your last paragraph as "counterfactuals about the output of my algorithm are meaningless since my algorithm only has one output"?

That looks right to me.

Either my opponent cooperates or not. That is a fact about math. How could it be different?

What do you mean by the first sentence? That the opponent's move is one of {C, D}? Or that we are either in the world where C is played with probability 1, or the world where D is played with probability 1?

It seems to me that pernicious algorithms could cause cycling if you aren't careful, and it's not specified what happens if the program fails to output an order. If you tack on some more assumptions, I'm comfortable with saying that the opponent must play either C or D, but the second seems outlawed by mixed strategies being possible.

If the "black box" was the provability oracle, then consider searching for proofs of bounded lengths, which is computable. If the black box was formal logic (Peano arithmetic), well, it doesn't look very black to me, but it is doing a lot of work.

Peano arithmetic is beyond my formal education. I think I understand most of what it's doing but the motivation of why one would turn to it is not yet clear.

I don't think you should be judging material implication by its name.

So, "A()=C => B()=C" means "it cannot be the case that I cooperate and they defect" and "A()=D => B()=D" means "it cannot be the case that I defect and they cooperate," and if both of those are true, then the algorithm cooperates. Right? (The first is to not be taken advantage of, and the second is to take advantage of those who will let you.)

What I'm not seeing is how you take two copies of that, evaluate them, and get "C,C" without jumping to the end by wishful thinking. It looks to me like the counterfactuals multiply without end, because every proof requires four subproofs that are just as complicated. (This is where it looks like the provability oracle is being called in, but I would rather avoid that if possible.)

Replies from: Douglas_Knight
comment by Douglas_Knight · 2012-03-26T07:30:14.498Z · LW(p) · GW(p)

Yes, there are complications like programs failing to terminate or using random numbers. For simplicity, we usually assume no random numbers. We could force termination by putting a time bound and some default action, but it's usually better to distinguish such program failure from other actions. In any event, at least one of the possibilities A()=C and A()=D is false, and thus claims like "A()=C => B()=C" are confusing.

If A and B have the same source code, then it is provable that they have the same source code and thus it is provable that A()=B(), and thus provable that A()=C => B()=C. That is what A and B do, is search for that proof. They do not simulate each other, so there is no multiplication of counterfactuals. Identical source code is a rather limited case, but it is a start. Loeb's theorem shows how to prove things in some other cases.
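
In the identical-source special case, the whole proof search collapses to a one-line check (a sketch of just that limited case, not of the Löbian generalization):

```python
def A(my_source, opponent_source):
    """The 'identical source code' special case: string equality of the two
    sources is itself a short, checkable proof that A()=B(), and hence that
    "A()=C => B()=C"; no simulation of the opponent is needed."""
    if my_source == opponent_source:
        return "C"      # provably the same output, so mutual cooperation
    return "D"          # no proof found by this (tiny) search: play it safe
```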

Replies from: Vaniver
comment by Vaniver · 2012-03-27T04:59:51.701Z · LW(p) · GW(p)

That is what A and B do, is search for that proof.

Ok. Where do they look for it, and how will they know if they've found it?

I don't like, but will accept for now, the "evaluate every possible proof less than X characters" method of finding proofs.

But I don't see how you determine those proofs are true or false without simulating A() and B(), especially if B() isn't a copy of A(), but some complicated algorithm that might or might not cash out as equivalent to A().

(Where I'm going with this: if this idea requires magic to do its basic operation, then I am uncomfortable with using this idea for anything.)

Replies from: Douglas_Knight
comment by Douglas_Knight · 2012-03-27T07:38:31.292Z · LW(p) · GW(p)

Very often in this conversation, I think we're using words to mean wildly different things; such as "proof." Do you need to simulate bubble sort to prove that it takes quadratic time?

Replies from: Vaniver
comment by Vaniver · 2012-03-27T18:41:30.301Z · LW(p) · GW(p)

Very often in this conversation, I think we're using words to mean wildly different things; such as "proof."

Very possibly! "Simulate" is another word that we may not be using the same way.

Do you need to simulate bubble sort to prove that it takes quadratic time?

I would say 'yes, if', conditioned on what you mean by simulate. It doesn't look like you can prove bubble sort takes quadratic time by running it a bunch of times, though you could certainly suggest that it does. But I also don't see how to prove bubble sort takes quadratic time without intense understanding of the algorithm (and it may be that intense understanding requires simulation).

If you're given the algorithm to begin with, this doesn't seem like a big issue. But if I give you some code, which is an implementation of bubble sort but not labeled as such, then proving it takes quadratic time is more involved, and may (will?) require running the code. Right?

Replies from: Douglas_Knight, fubarobfusco
comment by Douglas_Knight · 2012-03-27T21:06:38.925Z · LW(p) · GW(p)

It is certainly true that some algorithms are easier to prove things about than others. Let's set aside running time and just talk about correctness of sorting. There are algorithms that sort but are undecidable; or for which the proof is very long. But the proof that bubbling correctly sorts is short, even intercal bubbling. Finding a short proof quickly may be hard, but exhaustively searching for it is simple to implement and simple to prove things about. And once you've found it, you just verify it. You don't have to simulate the code to verify the proof. You certainly don't have an infinite regress.

If you like, you can say that by invoking exhaustive searching, we are assuming P=NP and assuming away the problem of AI. That is intentional. The question of decision theory is an independent problem and it is right to separate them this way. But exhaustive search is a concrete computable function, not "magic." Also, sometimes you really are constrained by space, not time and so exhaustive search is a realistic option.

Putting in a proof generator makes it easy to make an algorithm undecidable. If we make these deterministic algorithms play Rock-Paper-Scissors, they will fail to terminate or otherwise break down. Many similar algorithms with resource constraints will be undecidable to other algorithms with the same resource constraints. But proofs are not particularly hard to reason about and do not force undecidability / infinite regress. In particular, when the game is PD and the algorithms are trying to cooperate to be understood and reach (C,C) rather than (D,D). Then (sometimes) there are short proofs of the relevant things and they do manage to cooperate.

Replies from: Vaniver
comment by Vaniver · 2012-03-28T01:25:18.542Z · LW(p) · GW(p)

And once you've found it, you just verify it. You don't have to simulate the code to verify the proof. You certainly don't have an infinite regress.

I still don't see why this is the case, but I think I understand everything else about it, which suggests I should let this sit for a while and it'll become clear when it's relevant. (The proof verification is where it looked like magic was happening, not the search for interesting proofs.)

If you like, you can say that by invoking exhaustive searching, we are assuming P=NP and assuming away the problem of AI. That is intentional. The question of decision theory is an independent problem and it is right to separate them this way.

I think I disagree that it's right to separate them this way, but this may just be a matter of taste / goals. I am far more interested in prescriptive decision theory than normative decision theory (bounded rationality vs. unbounded rationality).

Replies from: Douglas_Knight
comment by Douglas_Knight · 2012-03-28T04:09:06.560Z · LW(p) · GW(p)

I considered saying that proofs are verifiable by definition. I mean a proof in formal logic; the point of formal logic is that verifying a proof is computable (fast, even). The point, which matches the common usage, is that a proof is convincing. You can't prove things about arbitrary code; these algorithms are not guaranteed to understand the opponent. But it's interesting that sometimes they do succeed in proving things about their opponents.

comment by fubarobfusco · 2012-03-27T23:52:15.968Z · LW(p) · GW(p)

If you're given the algorithm to begin with, this doesn't seem like a big issue. But if I give you some code, which is an implementation of bubble sort but not labeled as such, then proving it takes quadratic time is more involved, and may (will?) require running the code. Right?

I'd say it's easier to prove that a naïve bubble sort runs in quadratic time than it is to prove that, when it terminates, the list is actually sorted. It's obvious by inspection that it contains a nested loop over the same data structure.
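
For concreteness, here's a standard naive bubble sort (the textbook version, nothing from the post); the quadratic bound is visible as the nested loop over the same list, before you even ask whether the output is sorted:

    def bubble_sort(xs):
        """Naive bubble sort: the nested loop over the same list is what
        makes the quadratic running time obvious by inspection."""
        xs = list(xs)
        n = len(xs)
        for i in range(n):              # outer pass over the list
            for j in range(n - 1 - i):  # inner pass over (most of) the same list
                if xs[j] > xs[j + 1]:
                    xs[j], xs[j + 1] = xs[j + 1], xs[j]  # swap out-of-order pair
        return xs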

comment by cousin_it · 2012-03-25T00:04:34.219Z · LW(p) · GW(p)

What Nesov said. Löbian cooperation happened because I wanted to make quining cooperation work when the opponent has the same algorithm, but not necessarily the same source code. Illusion of transparency again: I post a lot of math, thinking that its motivation will be crystal clear to everyone...

Replies from: Vaniver
comment by Vaniver · 2012-03-25T07:11:45.332Z · LW(p) · GW(p)

Combining branches: check out this comment.

comment by orthonormal · 2012-03-24T23:19:39.191Z · LW(p) · GW(p)

You'll see by Part III. (Hint: when you see an infinite regress, consider a clever quining.)

Replies from: Vaniver
comment by Vaniver · 2012-03-25T07:13:07.871Z · LW(p) · GW(p)

Ok. I see how to do quining to only cooperate with copies of your source code, but I don't yet see how to do quining with outcome dependencies.

Replies from: orthonormal
comment by orthonormal · 2012-03-28T01:34:24.711Z · LW(p) · GW(p)

By the way, I was too blasé in the grandparent comment. I have a model of TDT that does almost the right thing, but I haven't figured out how to quine it so that it does exactly the right thing. (Technically, I can't be sure it'll still work if I do get the right quine in, but my gut feeling is that it will.)

Replies from: Vaniver, wedrifid
comment by Vaniver · 2012-03-28T04:44:34.581Z · LW(p) · GW(p)

So, I'm going to be pretty disappointed if this whole affair is just someone inventing a meta-cliquebot that's actually just a cliquebot.

Replies from: orthonormal
comment by orthonormal · 2012-03-28T14:15:01.397Z · LW(p) · GW(p)

Trust me, there's better stuff than that. (In particular, I've got a more nicely formalized write-up of UDT. It's just TDT I'm having issues with.)

comment by wedrifid · 2012-03-28T01:43:00.521Z · LW(p) · GW(p)

I haven't figured out how to quine it so that it does exactly the right thing.

The quining part usually seems to be the tricky part, doesn't it?

comment by selylindi · 2012-03-27T18:26:50.620Z · LW(p) · GW(p)

This branch of math is outside my training. I'm stumbling over the self-fulfilling prophecies section.

How can these two statements

if (output X)=b then U=10
if (output X)=b then U=0

both be true?

Replies from: orthonormal, ArisKatsaris
comment by orthonormal · 2012-03-27T23:41:44.840Z · LW(p) · GW(p)

Because in the second example, it's been deduced that (output X)=a. It's like how you can prove anything from a false premise.

Replies from: jsalvatier
comment by jsalvatier · 2012-03-28T19:10:43.100Z · LW(p) · GW(p)

I think it might help to say that explicitly.

Replies from: orthonormal
comment by orthonormal · 2012-03-28T23:44:31.650Z · LW(p) · GW(p)

Good call. Is my edit better?

Replies from: jsalvatier
comment by jsalvatier · 2012-03-29T00:57:31.757Z · LW(p) · GW(p)

Yes, though I would say "because you can prove anything from a false premise".

Replies from: orthonormal
comment by orthonormal · 2012-03-29T21:02:59.100Z · LW(p) · GW(p)

Subtle distinction: it's not unconditionally taking a false axiom and deriving a spurious conclusion; it's proving a conditional by proving that the antecedent is false.
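
Spelled out in symbols (my gloss on the post's example; a sketch, not a careful derivation):

    \begin{align*}
    &\vdash\ (\text{output } X) = a && \text{(the inference module deduces the actual output)}\\
    &\vdash\ \neg\big[(\text{output } X) = b\big] && \text{(since } a \neq b \text{)}\\
    &\vdash\ (\text{output } X) = b \;\rightarrow\; U = u \quad \text{for any } u && \text{(antecedent refuted, so the conditional holds vacuously)}
    \end{align*}

That's how both "if (output X)=b then U=10" and "if (output X)=b then U=0" can be theorems at once: neither says anything about what would really happen on output b.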

I'll see if I can improve the wording.

comment by ArisKatsaris · 2012-03-27T18:41:10.340Z · LW(p) · GW(p)

These articles seem beyond my skillset also, but this may help you:

In math, the sentence "if A then B" is equivalent to "(not A) or B".
So "if A then B" is considered true whenever A is false, regardless of what B is.

comment by Douglas_Knight · 2012-03-26T05:03:19.245Z · LW(p) · GW(p)

For footnote #1, it may be useful to give a couple of examples. When both programs are making decisions based on the other's source code, it is pretty close to the undecidable situation, but it depends on the game.

When the game is Rock-Paper-Scissors (or parity), the programs have contradictory goals, and they are pretty much the textbook example of noncomputability.

But if the game is the Prisoner's Dilemma (or the Coordination Game), they are trying to cooperate to land on (C,C), so they want to be decidable.
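
For illustration only (this is my sketch of the Löbian-cooperation idea cousin_it mentions elsewhere in the thread, not code from the post), here is the shape of a PD program that wants to be decidable, assuming a hypothetical bounded proof searcher find_proof(statement) that returns a proof or None:

    def proof_based_player(my_source, opponent_source, find_proof):
        """Cooperate iff a (length-bounded) proof can be found that the opponent
        cooperates when paired with this program; otherwise defect.

        find_proof(statement) is an assumed bounded proof search; it never
        needs to run the opponent's code, only to check candidate proofs."""
        statement = ("program {} outputs C when its opponent is program {}"
                     .format(opponent_source, my_source))
        if find_proof(statement) is not None:
            return "C"  # provably safe to aim for (C,C)
        return "D"      # fall back to the defensive Nash move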

comment by orthonormal · 2012-03-24T16:43:51.581Z · LW(p) · GW(p)

I plan to put all the acknowledgments and references in the last part; but I should mention now that I first read about the self-fulfilling prophecy problem on the decision-theory mailing list, in a post by Vladimir Slepnev. (EDIT: He passes on the credit to Benja Fallenstein in this LW comment.)

Other people have used "naive decision theory" before, some of them on Less Wrong, but none of the usages seemed to stick. So I called this one (which was an early candidate for UDT before the problem was noticed) Naive Decision Theory. I can change the name if people prefer.

Replies from: orthonormal, orthonormal, cousin_it, orthonormal
comment by orthonormal · 2012-03-24T16:45:25.791Z · LW(p) · GW(p)

Vote up this comment if I should stick with calling it Naive Decision Theory.

comment by orthonormal · 2012-03-24T16:45:01.371Z · LW(p) · GW(p)

Vote up this comment if I should change the name of Naive Decision Theory. Reply with a suggestion or upvote one of the suggestions that's been made.

Replies from: Dmytry
comment by Dmytry · 2012-03-24T19:42:50.058Z · LW(p) · GW(p)

I propose: Very Naive Decision Theory, since it represents an effort to make a decision theory which is maximally naive, by assuming that the makers of naive decision theory are naive about Gödel's incompleteness theorem and the halting problem, yet let their decision theory recurse rather than simply black-boxing it from itself. In practice, the black-boxing is always done to cut down on the work, because you are an applied mathematician, you have a problem to solve, and you won't enter the recursion if you can avoid it.

comment by cousin_it · 2012-03-24T23:50:07.432Z · LW(p) · GW(p)

I first read about the self-fulfilling prophecy problem on the decision-theory mailing list, in a post by Vladimir Slepnev

As far as I know, the problem of self-fulfilling prophecies and spurious counterfactuals was first pointed out by Benja Fallenstein in the comments to one of my posts.

comment by orthonormal · 2012-03-24T16:46:16.574Z · LW(p) · GW(p)

KARMA BALANCE: Call it "This Must Be The Mathematical Universe (Naive Decision Theory)"!

comment by Dmytry · 2012-03-24T17:59:20.048Z · LW(p) · GW(p)

Gödel's First Incompleteness Theorem and the Halting Problem both imply that it's impossible to write a program that correctly deduces in general1 the output of arbitrary other programs. So we have to be prepared for our inference module to fail sometimes.

And here you get the very basic thing wrong, the domain of applicability: limited computational power. Unlimited computational power introduces nasty infinities into things. "Naive decision theory" is not interested in games where you confuse yourself using infinities. The contest organizers have to be able to know the output of any program in order to score it! The contest should not be run by a halting oracle. Each program has to terminate within N CPU cycles; that's how contests are done. Nobody who comes up with naive decision theory really cares about contests run by a halting oracle.

Let's make it a little more practical: finite time and space. You can run any other program, but with a caveat: you may run out of time. You will have heuristics to optimize code before running it (heuristics which won't work on already-optimized code), and you'll know exactly how many cycles your optimizer has shaved off, so you know whether you have to abort or not (assuming there is no penalty for failing to terminate in time if the other program did not terminate in time either). You also recognize any instances of yourself, whose output you know equals your own, as well as any code that runs instances of you.

Then you proceed with this 'prove outputs(X) = a'. This is something out of TDT or the like; NDT does not do this kind of thing. You run in limited space and time; your optimizer does not work on your own source code (because the optimizer is included in your source code), and the only way you could prove that you output a is to run yourself and find yourself outputting a, which you can't do, because that way you enter infinite recursion and run out of cycles. You can't just drop in an 'okay, what if you prove this?' What if you prove 1=2? You will end up proving that all numbers are equal, by 1+1=2+1 and so on.

On top of this, while you are assuming that you output xi, you can no longer assume that you are a utility optimizer.

It's really easy to confuse oneself with abstractions without seeing the trees in the forest. You need to go step by step, with an actual proof, to show that the programs will end up proving that they output something.

edit: ahh, and another thing: you don't quite assume (output X) = xi in this way. That is, you don't put your source code on the left side of the statement; you black-box yourself. You assume your output is a, try different values of a, and calculate the payoff for each, without assuming that a maximizes utility. Also, whenever you see a copy of yourself, you substitute a.
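
If I turn that black-boxing recipe into code (my sketch of what's described above, with a hypothetical payoff_if(a) that evaluates the round after substituting a for every copy of the agent):

    def substitution_decision(possible_outputs, payoff_if):
        """Black-box yourself: for each candidate output a, substitute a wherever
        a copy of this agent appears, compute the resulting payoff, and only at
        the very end pick the best -- nowhere does the reasoning itself assume
        the agent is a utility maximizer."""
        best_output, best_payoff = None, float("-inf")
        for a in possible_outputs:
            u = payoff_if(a)  # evaluate the game with a plugged in everywhere
            if u > best_payoff:
                best_output, best_payoff = a, u
        return best_output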

Replies from: orthonormal, orthonormal, handoflixue
comment by orthonormal · 2012-03-24T18:40:32.109Z · LW(p) · GW(p)

Also, your intuitions for how to do decision theory are an unformalized version of something like CDT or TDT, as you'll see soon enough. [snark deleted by author]

comment by orthonormal · 2012-03-24T18:19:42.098Z · LW(p) · GW(p)

I never assumed that the inference module was infinitary; in fact, I said that it should be designed to return either a correct answer or "Unknown".

The very first example that was used for the self-fulfilling prophecy problem was that of a bounded proof-checker, which looks through all proofs shorter than length N. Finitary vs. infinitary is not the issue here.

Replies from: Dmytry
comment by Dmytry · 2012-03-24T18:36:56.746Z · LW(p) · GW(p)

edit: okay, that's just an argument over the definition of NDT.

Then don't refer to the halting problem and Gödel's incompleteness theorem. It is hard to object to very informal things; all I can say is 'you didn't prove this', but that is a fully general counterargument against informal things.

The other misinterpretation: NDT does not make the assumption (output X) = xi in this 'transparent' way, with its source code on the left side, and take it as an axiom. It has a black box for itself, denoted by the symbol A, which it substitutes for its own source code whenever it spots that source code. It substitutes the different xi for A in the resulting utility equations, solves for utility, and finds the maximum utility. It does not analyze itself.

I do agree that for ONDT (orthonormal's interpretation of NDT), which you present here, what you say is correct. I do not agree that your NDT is the NDT that applied mathematicians use. There are substantial differences.

Replies from: orthonormal
comment by orthonormal · 2012-03-24T19:56:13.700Z · LW(p) · GW(p)

If you're substituting in variables for the output of X rather than analyzing the round as a whole, then you're not talking about Naive Decision Theory; you're talking about something in the family of CDT and TDT.

Replies from: Dmytry
comment by Dmytry · 2012-03-24T20:04:26.044Z · LW(p) · GW(p)

TDT was made by Eliezer, right? So here we have the theory of a bright guy (maybe naive, but certainly not stupid) outside the field.

Replies from: orthonormal
comment by orthonormal · 2012-03-24T20:10:36.288Z · LW(p) · GW(p)

I don't understand your comment.

EDIT: Maybe you assumed that I was denigrating your intuitions on decision theory for departing from NDT? If so, that's not the case: substituting for the output of X in various spots turns out to be a good way to avoid the problem of spurious counterfactuals. CDT does this in a very basic way, TDT in a better way.

Replies from: Dmytry, Dmytry
comment by Dmytry · 2012-03-24T20:37:36.996Z · LW(p) · GW(p)

re: EDIT

Ahh, I see.

What I am saying is that the decision theory which applied mathematicians follow operates under practical constraints (limited computing time), and this prevents the introduction of very fancy things like your (output X), simply because they are computationally heavy. The theory that always substitutes is what I originally noted in the other thread, regarding Newcomb's problem. Due to substitution, that theory doesn't 'pollute' its own proof checker with a proposition that may be untrue and may break the proof checker (or, in practice, make the proof checker very slow).

edit: That being said, these decision theories operate under a notion of perfect accuracy and unlimited computing time; applied mathematicians see this field as not very relevant to any practical software (e.g. AI). A practical AI is best off substituting for X everywhere. A practical AI is never a guaranteed utility maximizer; there will always be problems that it won't be able to solve correctly in the given time, and the important consideration is to be able to solve as large a set of problems as possible in limited time. That is why I have very little inclination to write a semi-formalization of what I use when I am writing code to decide on things. I would rather write specifications for making an AI that actually does it; the specifications only need to be formal enough for the programmer who implements them to understand the intent. (Alternatively, I can implement it myself, and then it becomes completely formalized, to the point that a computer can run it.)

comment by Dmytry · 2012-03-24T20:16:29.988Z · LW(p) · GW(p)

It's just an argument over word definitions, in any case. You can call what I propose TDT if you wish. I'm just pointing out that Eliezer, being a bright guy outside the field, is precisely the kind of person to make a naive but not stupid decision theory which matches what many people actually use in practice.

Replies from: orthonormal
comment by orthonormal · 2012-03-24T20:22:02.307Z · LW(p) · GW(p)

I edited my comment while you were replying to it. I don't think we're actually disagreeing on this particular point.

comment by handoflixue · 2012-03-26T22:50:27.538Z · LW(p) · GW(p)

Agreed. Given access to the SOURCE of the opponent, you can just RUN the opponent and observe what they'll do this round. Of course, given, say, 10 seconds to calculate between rounds, the wise opponent just ensures it will take exactly 10 seconds to run their code; that way you can either calculate your own move OR the opponent's, but not both.

Then you get into the complexities of whether you have enough time at any given point to attempt to OPTIMIZE your opponent's program ("Roll dice for 9.99 seconds then select rock" is very easy to optimize down to "select rock"), and at that point I think we're sufficiently outside the simplified domain of the post :)
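
A minimal sketch of "just run them, but with a deadline" (my illustration; it assumes the opponent is a callable taking the two source strings, and the real tournament's time limit and sandboxing are whatever the organizers specify):

    import multiprocessing as mp

    def _call_opponent(opponent, opponent_source, my_source, queue):
        # The opponent sees the sources from its own point of view.
        queue.put(opponent(opponent_source, my_source))

    def simulate_opponent(opponent, opponent_source, my_source, timeout_seconds):
        """Run the opponent in a separate process and give up at the deadline,
        returning None ("Unknown") if it hasn't produced a move in time."""
        queue = mp.Queue()
        proc = mp.Process(target=_call_opponent,
                          args=(opponent, opponent_source, my_source, queue))
        proc.start()
        proc.join(timeout_seconds)
        if proc.is_alive():
            proc.terminate()  # opponent used the whole budget; treat as Unknown
            return None
        return queue.get()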

So, basically, you're right, but I don't think it's actually a relevant critique. I would enjoy seeing it addressed in the post, though, because it bugged me enough that I was going to post about it myself :)

Replies from: orthonormal
comment by orthonormal · 2012-04-08T16:15:55.641Z · LW(p) · GW(p)

Given access to the SOURCE of the opponent, you can just RUN the opponent and observe what they'll do this round.

Main problem: if you're playing against yourself, this creates an infinite loop of you simulating you simulating you simulating...

Fortunately, it's often possible to prove things about other programs without running them.

Of course, given, say, 10 seconds to calculate between rounds, the wise opponent just ensures it will take exactly 10 seconds to run their code; that way you can either calculate your own move OR the opponent's, but not both.

In a Prisoner's Dilemma, TDT would defect if it couldn't deduce anything in time about the other player. The only way to achieve mutual cooperation is to make it easy for the other player to deduce what you're doing in that circumstance. (I'll cover this in more depth in the TDT section.) This doesn't mean "always cooperating", nor does it mean giving up mixed strategies in zero-sum games.

comment by Rhwawn · 2012-03-24T16:04:30.312Z · LW(p) · GW(p)

Upvoted- just enough math.