Problematic Problems for TDT

post by drnickbone · 2012-05-29T15:41:37.964Z · LW · GW · Legacy · 293 comments

A key goal of Less Wrong's "advanced" decision theories (like TDT, UDT and ADT) is that they should out-perform standard decision theories (such as CDT) in contexts where another agent has access to the decider's code, or can otherwise predict the decider's behaviour. In particular, agents who run these theories will one-box on Newcomb's problem, and so generally make more money than agents that two-box. Slightly surprisingly, they may well continue to one-box even if the boxes are transparent, and even if the predictor Omega makes occasional errors (a problem due to Gary Drescher, which Eliezer has described as equivalent to "counterfactual mugging"). More generally, these agents behave as a CDT agent would wish it had pre-committed itself to behave before being faced with the problem.

However, I've recently thought of a class of Omega problems where TDT (and related theories) appears to under-perform compared to CDT. Importantly, these are problems which are "fair" - at least as fair as the original Newcomb problem - because the reward is a function of the agent's actual choices in the problem (namely which box or boxes get picked) and independent of the method that the agent uses to choose, or of its choices on any other problems. This contrasts with clearly "unfair" problems like the following:

Discrimination: Omega presents the usual two boxes. Box A always contains $1000. Box B contains nothing if Omega detects that the agent is running TDT; otherwise it contains $1 million.

 

So what are some fair "problematic problems"?

Problem 1: Omega (who experience has shown is always truthful) presents the usual two boxes A and B and announces the following. "Before you entered the room, I ran a simulation of this problem as presented to an agent running TDT. I won't tell you what the agent decided, but I will tell you that if the agent two-boxed then I put nothing in Box B, whereas if the agent one-boxed then I put $1 million in Box B. Regardless of how the simulated agent decided, I put $1000 in Box A. Now please choose your box or boxes."

Analysis: Any agent who is themselves running TDT will reason as in the standard Newcomb problem. They'll prove that their decision is linked to the simulated agent's, so that if they two-box they'll only win $1000, whereas if they one-box they will win $1 million. So the agent will choose to one-box and win $1 million.

However, any CDT agent can just take both boxes and win $1001000. In fact, any other agent who is not running TDT (e.g. an EDT agent) will be able to re-construct the chain of logic and reason that the simulation one-boxed and so box B contains the $1 million. So any other agent can safely two-box as well. 

Note that we can modify the contents of Box A so that it contains anything up to $1 million; the CDT agent (or EDT agent) can in principle win up to twice as much as the TDT agent.
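
A minimal sketch of Problem 1's payoff structure (illustrative only: the simulated agent's choice is hard-coded to the one-boxing argued for above, and the "choices" are plain labels, not real decision theories):

```python
# Toy model of Problem 1. The simulated TDT agent is hard-coded to one-box,
# per the analysis above; this is an illustration, not a TDT implementation.

def fill_boxes(simulated_choice="one-box"):
    """Omega fills the boxes based on what the simulated TDT agent did."""
    box_a = 1000
    box_b = 1_000_000 if simulated_choice == "one-box" else 0
    return box_a, box_b

def payoff(my_choice):
    box_a, box_b = fill_boxes()
    return box_b if my_choice == "one-box" else box_a + box_b

# A TDT agent's choice is logically linked to the simulation's, so it takes box B alone:
print(payoff("one-box"))   # 1000000
# Any agent whose choice is *not* linked to the simulation can safely take both boxes:
print(payoff("two-box"))   # 1001000
```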

 

Problem 2: Our ever-reliable Omega now presents ten boxes, numbered from 1 to 10, and announces the following. "Exactly one of these boxes contains $1 million; the others contain nothing. You must take exactly one box to win the money; if you try to take more than one, then you won't be allowed to keep any winnings. Before you entered the room, I ran multiple simulations of this problem as presented to an agent running TDT, and determined the box which the agent was least likely to take. If there were several such boxes tied for equal-lowest probability, then I just selected one of them, the one labelled with the smallest number. I then placed $1 million in the selected box. Please choose your box."

Analysis: A TDT agent will reason that whatever it does, it cannot have more than 10% chance of winning the $1 million. In fact, the TDT agent's best reply is to pick each box with equal probability; after Omega calculates this, it will place the $1 million under box number 1 and the TDT agent has exactly 10% chance of winning it.
 
But any non-TDT agent (e.g. CDT or EDT) can reason this through as well, and just pick box number 1, so winning $1 million. By increasing the number of boxes, we can ensure that TDT has arbitrarily low chance of winning, compared to CDT which always wins.
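
A sketch of Problem 2's arithmetic (the simulated TDT agent's mixed strategy is assumed to be uniform, as argued above; box indices 0-9 stand for boxes 1-10):

```python
# Toy model of Problem 2. The simulated TDT agent's strategy is assumed uniform;
# Omega puts the prize in the least-likely box, breaking ties by lowest index.

N = 10
tdt_strategy = [1.0 / N] * N                  # assumed: uniform over the ten boxes

least_likely = min(tdt_strategy)
prize_box = min(i for i, p in enumerate(tdt_strategy) if p == least_likely)

print(prize_box)                              # 0, i.e. "box 1"
print(tdt_strategy[prize_box])                # 0.1 -- TDT's chance of winning
# A non-TDT agent can rerun Omega's computation above and simply take box 1,
# winning with certainty.
```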


Some questions:

1. Have these or similar problems already been discovered by TDT (or UDT) theorists, and if so, is there a known solution? I had a search on Less Wrong but couldn't find anything obviously like them.

2. Is the analysis correct, or is there some subtle reason why a TDT (or UDT) agent would choose differently than described?

3. If a TDT agent believed (or had reason to believe) that Omega was going to present it with such problems, then wouldn't it want to self-modify to CDT? But this seems paradoxical, since the whole idea of a TDT agent is that it doesn't have to self-modify.

4. Might such problems show that there cannot be a single TDT algorithm (or family of provably-linked TDT algorithms) so that when Omega says it is simulating a TDT agent, it is quite ambiguous what it is doing? (This objection would go away if Omega revealed the source-code of its simulated agent, and the source-code of the choosing agent; each particular version of TDT would then be out-performed on a specific matching problem.)

5. Are these really "fair" problems? Is there some intelligible sense in which they are not fair, but Newcomb's problem is fair? It certainly looks like Omega may be "rewarding irrationality" (i.e. giving greater gains to someone who runs an inferior decision theory), but that's exactly the argument that CDT theorists use about Newcomb.

6. Finally, is it more likely that Omegas - or things like them - will present agents with Newcomb and Prisoner's Dilemma problems (on which TDT succeeds) rather than problematic problems (on which it fails)?

 

Edit: I tweaked the explanation of Box A's contents in Problem 1, since this was causing some confusion. The idea is that, as in the usual Newcomb problem, Box A always contains $1000. Note that Box B depends on what the simulated agent chooses; it doesn't depend on Omega predicting what the actual deciding agent chooses (so Omega doesn't put less money in any box just because it sees that the actual decider is running TDT).

Comments sorted by top scores.

comment by jimrandomh · 2012-05-23T14:14:20.966Z · LW(p) · GW(p)

You can construct a "counterexample" to any decision theory by writing a scenario in which it (or the decision theory you want to have win) is named explicitly. For example, consider Alphabetic Decision Theory, which writes a description of each of the options, then chooses whichever is first alphabetically. ADT is bad, but not so bad that you can't make it win: you could postulate an Omega which checks to see whether you're ADT, gives you $1000 if you are, and tortures you for a year if you aren't.

That's what's happening in Problem 1, except that it's a little bit hidden. There, you have an Omega which says: if you are TDT, I will make the content of these boxes depend on your choice in such a way that you can't have both; if you aren't TDT, I filled both boxes.

You can see that something funny has happened by postulating TDT-prime, which is identical to TDT except that Omega doesn't recognize it as a duplicate (e.g., it differs in some way that should be irrelevant). TDT-prime would two-box, and win.

Replies from: ciphergoth, ewbrownv, APMason
comment by Paul Crowley (ciphergoth) · 2012-05-23T15:09:44.270Z · LW(p) · GW(p)

Right, but this is exactly the insight of this post put another way. The possibility of an Omega that rewards, say, ADT is discussed in Eliezer's TDT paper. He sets out an idea of a "fair" test, which evaluates only what you do and what you are predicted to do, not what you are. What's interesting here is that this is a "fair" test by that definition, yet it acts like an unfair one.

Because it's a fair test, it doesn't matter whether Omega thinks TDT and TDT-prime are the same - what matters is whether TDT-prime thinks so.

Replies from: loup-vaillant, Jack, jimrandomh
comment by loup-vaillant · 2012-06-25T07:16:55.438Z · LW(p) · GW(p)

Because it's a fair test

No, not even by Eliezer's standard, because TDT is not given the same problem as other decision theories.

As stated in comments below, everyone but TDT has the information "I'm not in the simulation" (or more precisely, not in one of the simulations of the infinite regress implied by Omega's formulation). The reason TDT does not have this extra piece of information comes from the fact that it is TDT, not from any decision it may make.

Replies from: ciphergoth, APMason
comment by Paul Crowley (ciphergoth) · 2012-06-25T09:14:08.776Z · LW(p) · GW(p)

Right, and this is an unfairness that Eliezer's definition fails to capture.

Replies from: loup-vaillant
comment by loup-vaillant · 2012-06-25T11:43:57.978Z · LW(p) · GW(p)

At this point, I need the text of that definition.

Replies from: shokwave
comment by shokwave · 2012-06-25T12:04:12.126Z · LW(p) · GW(p)

The definition is in Eliezer's TDT paper, although a quick grep for "fair" didn't immediately turn it up.

comment by APMason · 2012-06-25T15:40:53.628Z · LW(p) · GW(p)

This variation of the problem was invented in the follow-up post (I think it was called "Sneaky strategies for TDT" or something like that):

Omega tells you that earlier he flipped a coin. If the coin came down heads, he simulated a CDT agent facing this problem. If the coin came down tails, he simulated a TDT agent facing this problem. In either case, if the simulated agent one-boxed, there is $1000000 in Box-B; if it two-boxed, Box-B is empty. In this case TDT still one-boxes (50% chance of $1000000 dominates a 100% chance of $1000), and CDT still two-boxes (because that's what CDT does). In this case, even though both agents have an equal chance of being simulated, CDT out-performs TDT (average payoffs of 501000 vs. 500000) - CDT takes advantage of TDT's prudence and TDT suffers for CDT's lack of it. Notice also that TDT cannot do better by behaving like CDT (both would get payoffs of 1000). This shows that the class of problems we're concerned with is not so much "fair" vs. "unfair", but more like "those problems on which the best I can do is not necessarily the best anyone can do". We can call it "fairness" if we want, but it's not like Omega is discriminating against TDT in this case.

Replies from: loup-vaillant
comment by loup-vaillant · 2012-06-25T16:04:04.191Z · LW(p) · GW(p)

This is not a zero-sum game. CDT does not outperform TDT here. It just makes a stupid mistake, and happens to pay for it less dearly than TDT does.

Let's say Omega submits the same problem to two arbitrary decision theories. Each will either 1-box or 2-box. Here is the average payoff matrix:

  • Both a and b 1-box -> They both get the million
  • Both a and b 2-box -> They both get 1000 only.
  • One 1-boxes, the other 2-boxes -> the 1-boxer gets half a million, the other gets about $1000 more.

Clearly, 1-boxing still dominates 2-boxing. Whatever the other does, you personally get about half a million more by 1-boxing. TDT may end up with less utility than CDT despite 1-boxing, but CDT is still being stupid here, while TDT is not.
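
A quick expected-value check of the payoff matrix above (a toy sketch assuming, as in the quoted problem, that Box A always holds $1000 and that your choice is linked to the simulation only when the simulated agent is of your own type):

```python
# Expected payoffs in the coin-flip variant. Omega simulates "my type" or the
# other agent's type with probability 1/2 each; Box B is full iff the simulated
# agent one-boxed; Box A always holds $1000.

def expected_payoff(my_choice, other_choice):
    total = 0.0
    for simulated in (my_choice, other_choice):          # fair coin over the two types
        box_b = 1_000_000 if simulated == "one-box" else 0
        total += 0.5 * (box_b if my_choice == "one-box" else 1000 + box_b)
    return total

for mine in ("one-box", "two-box"):
    for other in ("one-box", "two-box"):
        print(mine, "vs", other, "->", int(expected_payoff(mine, other)))
# one-box vs one-box -> 1000000
# one-box vs two-box -> 500000
# two-box vs one-box -> 501000
# two-box vs two-box -> 1000
```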

comment by Jack · 2012-05-23T22:06:14.577Z · LW(p) · GW(p)

He sets out an idea of a "fair" test, which evaluates only what you do and what you are predicted to do, not what you are.

Two questions: First, how is this distinction justified? What a decision theory is, is a strategy for responding to decision tasks, and simulating agents performing the right decision tasks tells you what kind of decision theory they're using. Why does it matter whether it's done implicitly (as in Newcomb's discrimination against CDT) or explicitly? And second, why should we care about it? Why is it important for a decision theory to pass fair tests but not unfair tests?

Replies from: APMason, ciphergoth
comment by APMason · 2012-05-24T10:47:29.494Z · LW(p) · GW(p)

Why is it important for a decision theory to pass fair tests but not unfair tests?

Well, on unfair tests a decision theory still needs to do as well as possible. If we had a version of the original Newcomb's problem, with the one difference that a CDT agent gets $1 billion just for showing up, it's still incumbent upon a TDT agent to walk away with $1 million rather than $1000. The "unfair" class of problems is that class where "winning as much as possible" is distinct from "winning the most out of all possible agents".

comment by Paul Crowley (ciphergoth) · 2012-05-24T06:50:28.492Z · LW(p) · GW(p)

Real-world unfair tests could matter, though it's not clear if there are any. However, hypothetical unfair tests aren't very informative about what is a good decision theory, because it's trivial to cook one up that favours one theory and disfavours another. I think the hope was to invent a decision theory that does well on all fair tests; the example above seems to show that may not be possible.

comment by jimrandomh · 2012-05-23T16:26:37.050Z · LW(p) · GW(p)

Not exactly. Because the problem statement says that it simulates "TDT", if you were to expand the problem statement out into code it would have to contain source code to a complete instantiation of TDT. When the problem statement is run, TDT or TDT-prime can look at that instantiation and compare it to its own source code. TDT will see that they're the same, but TDT-prime will notice that they are different, and thereby infer that it is not the simulated copy. (Any difference whatsoever is proof of this.)

Consider an alternative problem. Omega flips a coin, and asks you to guess what it was, with a prize if you guess correctly. If the coin was heads, he shows you a piece of paper with TDT's source code. If the coin was tails, he shows you a piece of paper with your source code, whatever that is.

Replies from: cousin_it
comment by cousin_it · 2012-05-23T17:54:33.247Z · LW(p) · GW(p)

I'm not sure the part about comparing source code is correct. TDT isn't supposed to search for exact copies of itself, it's supposed to search for parts of the world that are logically equivalent to itself.

Replies from: kybernetikos
comment by kybernetikos · 2012-06-06T12:05:55.465Z · LW(p) · GW(p)

The key thing is the question of whether it could have been you that was simulated. If all you know is that you're a TDT agent and what Omega simulated is a TDT agent, then it could have been you. Therefore you have to act as if your decision now may be either real or simulated. If you know you are not what Omega simulated (for any reason), then you know that you only have to worry about the 'real' decision.

Replies from: JGWeissman, cousin_it
comment by JGWeissman · 2012-06-06T16:34:19.205Z · LW(p) · GW(p)

Suppose that Omega doesn't reveal the full source code of the simulated TDT agent, but just reveals enough logical facts about the simulated TDT agent to imply that it uses TDT. Then the "real" TDT Prime agent cannot deduce that it is different.

Replies from: kybernetikos
comment by kybernetikos · 2012-06-19T07:30:10.111Z · LW(p) · GW(p)

Yes. I think that as long as there is any chance of you being the simulated agent, then you need to one-box. So you one-box if Omega tells you 'I simulated some agent', and one-box if Omega tells you 'I simulated an agent that uses the same decision procedure as you', but two-box if Omega tells you 'I simulated an agent that had a different copyright comment in its source code to the comment in your source code'.

This is just a variant of the 'detect if I'm in a simulation' function that others mention, i.e. if Omega gives you access to that information in any way, you can two-box. Of course, I'm a bit stuck on what Omega has told the simulation in that case. Has Omega done an infinite regress?

comment by cousin_it · 2012-06-06T15:57:44.637Z · LW(p) · GW(p)

That's an interesting way to look at the problem. Thanks!

comment by ewbrownv · 2012-06-11T22:09:55.903Z · LW(p) · GW(p)

Indeed. These are all scenarios of the form "Omega looks at the source code for your decision theory, and intentionally creates a scenario that breaks it." Omega could do this with any possible decision theory (or at least, anything that could be implemented with finite resources), so what exactly are we supposed to learn by contemplating specific examples?

It seems to me that the valuable Omega thought experiments are the ones where Omega's omnipotence is simply used to force the player to stick to the rules of the given scenario. When you start postulating that an impossible, acausal superintelligence is actively working against you, it's time to hang up your hat and go home, because no strategy you could possibly come up with is going to do you any good.

Replies from: MugaSofer
comment by MugaSofer · 2012-12-24T21:57:12.544Z · LW(p) · GW(p)

The trouble is when another agent wins in this situation and in the situations you usually encounter. For example, an anti-traditional-rationalist, that always makes the opposite choice to a traditional rationalist, will one-box; it just fails spectacularly when asked to choose between different amounts of cake.

comment by APMason · 2012-05-23T14:28:04.607Z · LW(p) · GW(p)

You can see that something funny has happened by postulating TDT-prime, which is identical to TDT except that Omega doesn't recognize it as a duplicate (e.g., it differs in some way that should be irrelevant). TDT-prime would two-box, and win.

I don't think so. If TDT-prime two-boxes, the TDT simulation two-boxes, so only one box is full, so TDT-prime walks away with $1000. Omega doesn't check what decision theory you're using at all - it just simulates TDT and bases its decision on that. I do think that this ought to fall outside a rigorously defined class of "fair" problems, but it doesn't matter whether Omega can recognise you as a TDT-agent or not.

Replies from: jimrandomh
comment by jimrandomh · 2012-05-23T14:30:47.891Z · LW(p) · GW(p)

I don't think so. If TDT-prime two-boxes, the TDT simulation two-boxes, so only one box is full, so TDT-prime walks away with $1000.

No, if TDT-prime two boxes, the TDT simulation still one-boxes.

Replies from: APMason
comment by APMason · 2012-05-23T14:39:16.763Z · LW(p) · GW(p)

Hmm, so TDT-prime would reason something like, "The TDT simulation will one-box because, not knowing that it's the simulation, but also knowing that the simulation will use exactly the same decision theory as itself, it will conclude that the simulation will do the same thing as itself and so one-boxing is the best option. However, I'm different to the TDT-simulation, and therefore I can safely two-box without affecting its decision." In which case, does it matter how inconsequential the difference is? Yep, I'm confused.

Replies from: drnickbone, MugaSofer
comment by drnickbone · 2012-05-23T15:34:34.706Z · LW(p) · GW(p)

I also had thoughts along these lines - variants of TDT could logically separate themselves, so that T-0 one-boxes when it is simulated, but T-1 has proven that T-0 will one-box, and hence T-1 two-boxes when T-0 is the sim.

But a couple of difficulties arise. The first is that if TDT variants can logically separate from each other (i.e. can prove that their decisions aren't linked) then they won't co-operate with each other in Prisoner's Dilemma. We could end up with a bunch of CliqueBots that only co-operate with their exact clones, which is not ideal.

The second difficulty is that for each specific TDT variant, one with algorithm T' say, there will be a specific problematic problem on which T' will do worse than CDT (and indeed worse than all the other variants of TDT) - this is the problem with T' being the exact algorithm running in the sim. So we still don't get the - desirable - property that there is some sensible decision theory called TDT that is optimal across fair problems.

The best suggestion I've heard so far is that we try to adjust the definition of "fairness", so that these problematic problems also count as "unfair". I'm open to proposals on that one...

Replies from: AlexMennen, jimrandomh, APMason
comment by AlexMennen · 2012-06-04T23:39:19.374Z · LW(p) · GW(p)

But a couple of difficulties arise. The first is that if TDT variants can logically separate from each other (i.e. can prove that their decisions aren't linked) then they won't co-operate with each other in Prisoner's Dilemma. We could end up with a bunch of CliqueBots that only co-operate with their exact clones, which is not ideal.

I think this is avoidable. Let's say that there are two TDT programs called Alice and Bob, which are exactly identical except that Alice's source code contains a comment identifying it as Alice, whereas Bob's source code contains a comment identifying it as Bob. Each of them can read their own source code. Suppose that in Problem 1, Omega reveals that the source code it used to run the simulation was Alice. Alice has to one-box. But Bob faces a different situation than Alice does, because he can find a difference between his own source code and the one Omega simulated, whereas Alice could not. So Bob can two-box without affecting what Alice would do.

However, if Alice and Bob play the prisoner's dilemma against each other, the situation is much closer to symmetric. Alice faces a player identical to itself except with the "Alice" comment replaced with "Bob", and Bob faces a player identical to itself except with the "Bob" comment replaced with "Alice". Hopefully, their algorithm would compress this information down to "The other player is identical to me, but has a comment difference in its source code", at which point each player would be in an identical situation.
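
A rough sketch of the classification step described here, under the strong simplifying assumption that the only implementation difference is a '#'-style comment (real source comparison would need far more than this; the sources below are hypothetical stand-ins):

```python
# Sketch of "identical in algorithm, differing in implementation", assuming the
# only allowed difference is in comments. Illustrative, not a real TDT component.

import re

def strip_comments(source: str) -> str:
    return re.sub(r"#[^\n]*", "", source)

def classify(my_source: str, other_source: str) -> str:
    if other_source == my_source:
        return "identical in algorithm and implementation"
    if strip_comments(other_source) == strip_comments(my_source):
        return "identical in algorithm, differing in implementation"
    return "other"

alice_src = "# Alice\nreturn decide(problem)"
bob_src = "# Bob\nreturn decide(problem)"

print(classify(alice_src, alice_src))   # what Alice sees when Omega used her own code
print(classify(bob_src, alice_src))     # what Bob sees when Omega used Alice's code
```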

Replies from: drnickbone, kybernetikos, MugaSofer
comment by drnickbone · 2012-06-09T11:24:08.377Z · LW(p) · GW(p)

You might want to look at my follow-up article which discusses a strategy like this (among others). It's worth noting that slight variations of the problem remove the opportunity for such "sneaky" strategies.

Replies from: AlexMennen
comment by AlexMennen · 2012-06-09T20:46:14.991Z · LW(p) · GW(p)

Ah, thanks. I had missed that, somehow.

comment by kybernetikos · 2012-06-06T12:12:51.481Z · LW(p) · GW(p)

In a prisoner's dilemma, Alice and Bob affect each other's outcomes. In the Newcomb problem, Alice affects Bob's outcome, but Bob doesn't affect Alice's outcome. That's why it's OK for Bob to consider himself different in the second case as long as he knows he is definitely not Alice (because otherwise he might actually be in a simulation), but not OK for him to consider himself different in the prisoner's dilemma.

comment by MugaSofer · 2012-12-25T16:13:32.000Z · LW(p) · GW(p)

However, if Alice and Bob play the prisoner's dilemma against each other, the situation is much closer to symmetric. Alice faces a player identical to itself except with the "Alice" comment replaced with "Bob", and Bob faces a player identical to itself except with the "Bob" comment replaced with "Alice". Hopefully, their algorithm would compress this information down to "The other player is identical to me, but has a comment difference in its source code", at which point each player would be in an identical situation.

Why doesn't that happen when dealing with Omega?

Replies from: AlexMennen
comment by AlexMennen · 2012-12-25T20:01:22.132Z · LW(p) · GW(p)

Because if Omega uses Alice's source code, then Alice sees that the source code of the simulation is exactly the same as hers, whereas Bob sees that there is a comment difference, so the situation is not symmetric.

Replies from: MugaSofer
comment by MugaSofer · 2012-12-25T22:21:11.337Z · LW(p) · GW(p)

So why doesn't that happen in the prisoner's dilemma?

Replies from: AlexMennen
comment by AlexMennen · 2012-12-25T22:47:57.974Z · LW(p) · GW(p)

Because Alice sees that Bob's source code is the same as hers except for a comment difference, and Bob sees that Alice's source code is the same as his except for a comment difference, so the situation is symmetric.

Replies from: MugaSofer
comment by MugaSofer · 2012-12-26T01:32:52.098Z · LW(p) · GW(p)

Newcomb:

Bob sees that there is a comment difference, so the situation is not symmetric.

Prisoner's Dilemma:

Bob sees that Alice's source code is the same as his except for a comment difference, so the situation is symmetric.

Do you see the contradiction here?

Replies from: AlexMennen
comment by AlexMennen · 2012-12-26T01:59:23.596Z · LW(p) · GW(p)

Newcomb, Alice: The simulation's source code and available information is literally exactly the same as Alice's, so if Alice 2-boxes, the simulation will too. There's no way around it. So Alice one-boxes.

Newcomb, Bob: The simulation was in the situation described above. Bob thus predicts that it will one-box. Bob himself is in an entirely different situation, since he can see a source code difference, so if he two-boxes, it does not logically imply that the simulation will two-box. So Bob two-boxes and the simulation one-boxes.

Prisoner's Dilemma: Alice sees Bob's source code, and summarizes it as "identical to me except for a different comment". Bob sees Alice's source code, and summarizes it as "identical to me except for a different comment". Both Alice and Bob run the same algorithm, and they now have the same input, so they must produce the same result. They figure this out, and cooperate.

Replies from: MugaSofer
comment by MugaSofer · 2012-12-26T02:15:28.411Z · LW(p) · GW(p)

Ignore Alice's perspective for a second. Why is Bob acting differently? He's seeing the same code both times.

Replies from: AlexMennen
comment by AlexMennen · 2012-12-26T02:21:57.738Z · LW(p) · GW(p)

Don't ignore Alice's perspective. Bob knows what Alice's perspective is, so since there is a difference in Alice's perspective, there is by extension a difference in Bob's perspective.

Replies from: MugaSofer
comment by MugaSofer · 2012-12-26T02:25:33.569Z · LW(p) · GW(p)

Bob looks at the same code both times. In the PD, he treats it as identical to his own. In NP, he treats it as different. Why?

Replies from: AlexMennen
comment by AlexMennen · 2012-12-26T02:31:59.104Z · LW(p) · GW(p)

The source code that Bob is looking at is the same in each case, but the source code that [the source code that Bob is looking at] is looking at is different in the two situations.

NP: Bob is looking at Alice, who is looking at Alice, who is looking at Alice, ...

PD: Bob is looking at Alice, who is looking at Bob, who is looking at Alice, ...

Clarifying edit: In both cases, Bob concludes that the source code he is looking at is functionally equivalent to his own. But in NP, Bob treats the input to the program he is looking at as different from his input, whereas in PD, Bob treats the input to the program he is looking at as functionally equivalent to his input.

Replies from: MugaSofer
comment by MugaSofer · 2012-12-26T02:38:30.052Z · LW(p) · GW(p)

PD: Bob is looking at Alice, who is looking at Bob, who is looking at Alice, ...

But you said Bob concludes that their decision theories are functionally identical, and thus it reduces to:

PD: TDT is looking at TDT, who is looking at TDT, who is looking at TDT, ...

And yet this does not occur in NP.

EDIT:

The source code that Bob is looking at is the same in each case, but the source code that [the source code that Bob is looking at] is looking at is different in the two situations.

The point is that his judgement of the source code changes, from "some other agent" to "another TDT agent".

Replies from: AlexMennen
comment by AlexMennen · 2012-12-26T02:42:06.836Z · LW(p) · GW(p)

Looks like my edit was poorly timed.

Clarifying edit: In both cases, Bob concludes that the source code he is looking at is functionally equivalent to his own. But in NP, Bob treats the input to the program he is looking at as different from his input, whereas in PD, Bob treats the input to the program he is looking at as functionally equivalent to his input.

One way of describing it is that the comment is extra information that is distinct from the decision agent, and that Bob can make use of this information when making his decision.

Replies from: MugaSofer
comment by MugaSofer · 2012-12-26T03:01:16.624Z · LW(p) · GW(p)

Oops, didn't see that.

What's the point of adding comments if Bob's just going to conclude their code is functionally identical anyway? Doesn't that mean that you might as well use the same code for Bob and Alice, and call it TDT?

Replies from: AlexMennen
comment by AlexMennen · 2012-12-26T04:02:40.242Z · LW(p) · GW(p)

In NP, the comments are to provide Bob an excuse to two-box that does not result in the simulation two-boxing. In PD, the comments are there to illustrate that TDT needs a sophisticated algorithm for identifying copies of itself that can recognize different implementations of the same algorithm.

Do you understand why Bob acts differently in the two situations, now?

Replies from: MugaSofer
comment by MugaSofer · 2012-12-26T15:34:24.273Z · LW(p) · GW(p)

In NP, the comments are to provide Bob an excuse to two-box that does not result in the simulation two-boxing.

I was assuming Bob was an AI, lacking a ghost to look over his code for reasonableness. If he's not, then he isn't strictly implementing TDT, is he?

Replies from: AlexMennen
comment by AlexMennen · 2012-12-26T18:09:29.512Z · LW(p) · GW(p)

Bob is an AI. He's programmed to look for similarities between other AIs and himself so that he can treat their action and his as logically linked when it is to his advantage to do so. I was arguing that a proper implementation of TDT should consider Bob's and Alice's decisions linked in PD and nonlinked in the NP variant. I don't really understand your objection.

Replies from: MugaSofer
comment by MugaSofer · 2012-12-27T02:09:57.699Z · LW(p) · GW(p)

My objection is that an AI looking at the same question - is Alice functionally identical to me - can't look for excuses why they're not really the same when this would be useful, if they actually behave the same way. His answer should be the same in both cases, because they are either functionally identical or not.

Replies from: AlexMennen
comment by AlexMennen · 2012-12-27T02:20:30.461Z · LW(p) · GW(p)

The proper question is "In the context of the problems each of us face, is there a logical connection between my actions and Alice's actions?", not "Is Alice functionally identical to me?"

Replies from: MugaSofer
comment by MugaSofer · 2012-12-27T02:53:55.406Z · LW(p) · GW(p)

I think those terms both mean the same thing.

For reference, by "functionally identical" I meant "likely to choose the same way I do". Thus, an agent that will abandon the test to eat beans is functionally identical when beans are unavailable.

Replies from: AlexMennen
comment by AlexMennen · 2012-12-27T03:06:04.776Z · LW(p) · GW(p)

I guess my previous response was unhelpful. Although "Is Alice functionally identical to me?" is not the question of primary concern, it is a relevant question. But another relevant question is "Is Alice facing the same problem that I am?" Two functionally identical agents facing different problems may make different choices.

In the architecture I've been envisioning, Alice and Bob can classify other agents as "identical to me in both algorithm and implementation" or "identical to me in algorithm, with differing implementation", or one of many other categories. For each of the two categories I named, they would assume that an agent in that category will make the same decision as they would when presented with the same problem (so they would both be subcategories of "functionally identical"). In both situations, each agent classifies the other as identical in algorithm and differing in implementation.

In the prisoners' dilemma, each agent is facing the same problem, that is, "I'm playing a prisoner's dilemma with another agent that is identical to me in algorithm but differing in implementation". So they treat their decisions as linked.

In the Newcomb's problem variant, Alice's problem is "I'm in Newcomb's problem, and the predictor used a simulation that is identical to me in both algorithm and implementation, and which faced the same problem that I am facing." Bob's problem is "I'm in Newcomb's problem, and the predictor used a simulation that is identical to me in algorithm but differing in implementation, and which faced the same situation as Alice." There was a difference in the two problem descriptions even before the part about what problem the simulation faced, so when Bob notes that the simulation faced the same problem as Alice, he finds a difference between the problem that the simulation faced and the problem that he faces.

Replies from: MugaSofer
comment by MugaSofer · 2012-12-27T03:43:24.138Z · LW(p) · GW(p)

For each of the two categories I named, they would assume that an agent in that category will make the same decision as they would when presented with the same problem (so they would both be subcategories of "functionally identical").

Then why are we talking about "Bob" and "Alice" when they're both just TDT agents?

Replies from: AlexMennen
comment by AlexMennen · 2012-12-27T03:53:18.479Z · LW(p) · GW(p)

Because if Bob does not ignore the implementation difference, he ends up with more money in the Newcomb's problem variant.

Replies from: MugaSofer
comment by MugaSofer · 2012-12-27T04:04:01.155Z · LW(p) · GW(p)

But there is no difference between "Bob looking at Alice looking at Bob" and "Alice looking at Alice looking at Alice". That's the whole point of TDT.

Replies from: AlexMennen
comment by AlexMennen · 2012-12-27T04:37:25.242Z · LW(p) · GW(p)

There is a difference. In the first one, the agents have a slight difference in their source code. In the second one, the source code of the two agents is identical.

If you're claiming that TDT does not pay attention to such differences, then we only have a definitional dispute, and by your definition, an agent programmed the way I described would not be TDT. But I can't think of anything about the standard descriptions of TDT that would indicate such a restriction. It is certainly not the "whole point" of TDT.

For now, I'm going to call the thing you're telling me TDT is "TDT1", and I'm going to call the agent architecture I was describing "TDT2". I'm not sure if this is good terminology, so let me know if you'd rather call them something else.

Anyway, consider the four programs Alice1, Bob1, Alice2, and Bob2. Alice1 and Bob1 are implementations of TDT1, and are identical except for having a different identifier in the comments (and this difference changes nothing). Alice2 and Bob2 are implementations of TDT2, and are identical except for having a different identifier in the comments.

Consider the Newcomb's problem variant with the first pair of agents (Alice1 and Bob1). Alice1 is facing the standard Newcomb's problem, so she one-boxes and gets $1,000,000. As far as Bob1 can tell, he also faces the standard Newcomb's problem (there is a difference, but he ignores it), so he one-boxes and gets $1,000,000.

Now consider the same problem, but with all instances of Alice1 replaced with Alice2, and all instances of Bob1 replaced with Bob2. Alice2 still faces the standard Newcomb's problem, so she one-boxes and gets $1,000,000. But Bob2 two-boxes and gets $1,001,000.

The problem seems pretty fair; it doesn't specifically reference either TDT1 or TDT2 in an attempt to discriminate. However, when we replace the TDT1 agents with TDT2 agents, one of them does better and neither of them does worse, which seems to indicate a pretty serious deficiency in TDT1.

Replies from: MugaSofer
comment by MugaSofer · 2012-12-27T16:33:30.829Z · LW(p) · GW(p)

Either TDT decides if something is identical based on its actions, in which case I am right, or its source code, in which case you are wrong, because such an agent would not cooperate in the Prisoner's Dilemma.

Replies from: AlexMennen
comment by AlexMennen · 2012-12-27T18:50:32.739Z · LW(p) · GW(p)

They decide using the source code. I already explained why this results in them cooperating in the Prisoner's Dilemma.

In the architecture I've been envisioning, Alice and Bob can classify other agents as "identical to me in both algorithm and implementation" or "identical to me in algorithm, with differing implementation", or one of many other categories. For each of the two categories I named, they would assume that an agent in that category will make the same decision as they would when presented with the same problem (so they would both be subcategories of "functionally identical"). In both situations, each agent classifies the other as identical in algorithm and differing in implementation.

In the prisoners' dilemma, each agent is facing the same problem, that is, "I'm playing a prisoner's dilemma with another agent that is identical to me in algorithm but differing in implementation". So they treat their decisions as linked.

Replies from: MugaSofer
comment by MugaSofer · 2012-12-28T18:16:39.161Z · LW(p) · GW(p)

Wait! I think I get it! In a Prisoner's Dilemma, both agents are facing another agent, whereas in Newcomb's Problem, Alice is facing an infinite chain of herself, whereas Bob is facing an infinite chain of someone else. It's like the "favorite number" example in the followup post.

Replies from: AlexMennen
comment by AlexMennen · 2012-12-28T23:29:18.808Z · LW(p) · GW(p)

Yes.

Replies from: MugaSofer
comment by MugaSofer · 2012-12-29T15:41:14.178Z · LW(p) · GW(p)

Well that took embarrassingly long.

comment by jimrandomh · 2012-05-23T20:14:02.249Z · LW(p) · GW(p)

The right place to introduce the separation is not in between TDT and TDT-prime, but in between TDT-prime's output and TDT-prime's decision. If its output is a strategy, rather than a number of boxes, then that strategy can include a byte-by-byte comparison; and if TDT and TDT-prime both do it that way, then they both win as much as possible.
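
One way to read this, as a hedged sketch: the agent's output is a strategy (a function of the simulation's source code) rather than a fixed number of boxes, so every variant can commit to the same strategy while executing it differently. The source strings are placeholders, not real agents:

```python
# Sketch of "output a strategy, not a choice": the strategy one-boxes exactly
# when the simulated source matches its own byte for byte.

def make_strategy(my_source: str):
    def strategy(simulated_source: str) -> str:
        return "one-box" if simulated_source == my_source else "two-box"
    return strategy

tdt_src = "TDT source"
tdt_prime_src = "TDT source, cosmetically different"

print(make_strategy(tdt_src)(tdt_src))          # one-box: the simulation fills Box B
print(make_strategy(tdt_prime_src)(tdt_src))    # two-box: Box B is already full
```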

Replies from: dlthomas, drnickbone
comment by dlthomas · 2012-05-23T20:25:17.296Z · LW(p) · GW(p)

But doesn't that make cliquebots, in general?

comment by drnickbone · 2012-05-24T12:08:43.885Z · LW(p) · GW(p)

I'm thinking hard about this one...

Can all the TDT variants adopt a common strategy, but with different execution results, depending on source-code self-inspection and sim-inspection? Can that approach really work in general without creating CliqueBots? Don't know yet without detailed analysis.

Another issue is that Omega is not obliged to reveal the source-code of the sim; it could instead provide some information about the method used to generate / filter the sim code (e.g. a distribution the sim was drawn from) and still lead to a well-defined problem. Each TDT variant would not then know whether it was the sim.

I'm aiming for a follow-up article addressing this strategy (among others).

Replies from: khafra
comment by khafra · 2012-05-24T17:57:56.203Z · LW(p) · GW(p)

Can all the TDT variants adopt a common strategy, but with different execution results, depending on source-code self-inspection and sim-inspection?

This sounds equivalent to asking "can a Turing machine generate random numbers non-deterministically?" Unless you're thinking about coding TDT agents one at a time and setting some constant differently in each one.

comment by APMason · 2012-05-23T16:22:55.801Z · LW(p) · GW(p)

Well, I've had a think about it, and I've concluded that it would matter how great the difference between TDT and TDT-prime is. If TDT-prime is almost the same as TDT, but has an extra stage in its algorithm in which it converts all dollar amounts to yen, it should still be able to prove that it is isomorphic to Omega's simulation, and therefore will not be able to take advantage of "logical separation".

But if TDT-prime is different in a way that makes it non-isomorphic, i.e. it sometimes gives a different output given the same inputs, that may still not be enough to "separate" them. If TDT-prime acts the same as TDT, except when there is a walrus in the vicinity, in which case it tries to train the walrus to fight crime, it is still the case in this walrus-free problem that it makes exactly the same choice as the simulation (?). It's as if you need the ability to prove that two agents necessarily give the same output for the particular problem you're faced with, without proving what output those agents actually give, and that sure looks crazy-hard.

EDIT: I mean crazy-hard for the general case, but much, much easier for all the cases where the two agents are actually the same.

EDIT 2: On the subject of fairness, my first thoughts: A fair problem is one in which if you had arrived at your decision by a coin flip (which is as transparently predictable as your actual decision process - i.e. Omega can predict whether it's going to come down heads or tails with perfect accuracy), you would be rewarded or punished no more or less than you would be using your actual decision algorithm (and this applies to every available option).

EDIT 3: Sorry to go on like this, but I've just realised that won't work in situations where some other agent bases their decision on whether you're predicting what their decision will be, i.e. Prisoner's Dilemma.

comment by MugaSofer · 2012-12-25T16:07:16.427Z · LW(p) · GW(p)

Yep, I'm confused.

Sounds like you have it exactly right.

comment by Ezekiel · 2012-05-23T11:03:55.441Z · LW(p) · GW(p)

I think we could generalise problem 2 to be problematic for any decision theory XDT:

There are 10 boxes, numbered 1 to 10. You may only take one. Omega has (several times) run a simulated XDT agent on this problem. It then put a prize in the box which it determined was least likely to be taken by such an agent - or, in the case of a tie, in the box with the lowest index.

If agent X follows XDT, it has at best a 10% chance of winning. Any sufficiently resourceful YDT agent, however, could run a simulated XDT agent themselves, and figure out what Omega's choice was without getting into an infinite loop.

Therefore, YDT performs better than XDT on this problem.

If I'm right, we may have shown the impossibility of a "best" decision theory, no matter how meta you get (in a close analogy to Godelian incompleteness). If I'm wrong, what have I missed?
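
A sketch of the generalised construction (the simulated XDT strategy below is an arbitrary stand-in; any mixed strategy over the boxes can be plugged in):

```python
# Generalised Problem 2: for *any* simulated strategy over N boxes, the
# least-likely box has probability at most 1/N, so XDT wins at most 1/N of the
# time, while another theory that reruns Omega's computation wins outright.

def omega_prize_box(strategy):
    """Least-likely box under the simulated strategy; ties go to the lowest index."""
    low = min(strategy)
    return min(i for i, p in enumerate(strategy) if p == low)

example_xdt_strategy = [0.05, 0.15, 0.10, 0.10, 0.10, 0.10, 0.10, 0.10, 0.10, 0.10]
prize = omega_prize_box(example_xdt_strategy)

print(prize)                           # 0 -- the box XDT is least likely to take
print(example_xdt_strategy[prize])     # 0.05 -- XDT's winning probability
# A YDT agent simply takes box 0 and wins with certainty.
```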

Replies from: cousin_it, dlthomas, Bundle_Gerbe, MugaSofer
comment by cousin_it · 2012-05-23T11:33:55.710Z · LW(p) · GW(p)

You're right about problem 2 being a fully general counterargument, but your philosophical conclusion seems to be stopping too early. For example, can we define a class of "fair" problems that excludes problem 2?

Replies from: Ezekiel, ciphergoth
comment by Ezekiel · 2012-05-23T22:36:28.547Z · LW(p) · GW(p)

It looks like the issue here is that while Omega is ostensibly not taking into account your decision theory, it implicitly is by simulating an XDT agent. So a first patch would be to define simulations of a specific decision theory (as opposed to simulations of a given agent) as "unfair".

On the other hand, we can't necessarily know if a given computation is effectively equivalent to simulating a given decision theory. Even if the string "TDT" is never encoded anywhere in Omega's super-neurons, it might still be simulating a TDT agent, for example.

On the first hand again, it might be easy for most problems to figure out whether anyone is implicitly favouring one DT over another, and thus whether they're "fair".

comment by Paul Crowley (ciphergoth) · 2012-05-23T12:11:09.827Z · LW(p) · GW(p)

One possible place to look is that we're allowing Omega access not just to a particular simulated decision of TDT, but to the probabilities with which it makes these decisions. If we force it to simulate TDT many times and sample to learn what the probabilities are, it can't detect the exact balance for which it does deterministic symmetry breaking, and the problem goes away.

This solution occurred to me because it forces Omega to have something like a continuous behavioural response to changes in the probabilities of different TDT outputs, and given that, it seems possible to imagine a proof that a fixed point must exist.

Replies from: drnickbone
comment by drnickbone · 2012-05-23T13:20:26.514Z · LW(p) · GW(p)

Fair point - how does Omega tell when the sim's choosing probabilities are exactly equal? Well I was thinking that Omega could prove they are equal (by analysing the simulation's behaviour, and checking where it calls on random bits). Or if it can't do that, then it can just check that the choice frequencies are "statistically equal" (i.e. no significant differences after a billion runs, say) and treat them as equal for the tie-breaker rule. The "statistically equal" approach might give the TDT agent a very slightly higher than 10% chance of winning the money, though I haven't analysed this in any detail.

Replies from: Ezekiel
comment by Ezekiel · 2012-05-23T22:26:31.926Z · LW(p) · GW(p)

If the subject can know the exact code of TDT, Omega can know the exact code of TDT, and analyse it however it likes. That means it can know exactly where randomness is invoked - why would it have to sample?

Replies from: drnickbone
comment by drnickbone · 2012-05-24T11:59:02.050Z · LW(p) · GW(p)

This was my first thought: Omega can just prove the choosing probabilities are equal. However, it's not totally straightforward, because the sim could sample more random bits depending on the results of its first random bits, and so on, leading to an exponentially growing outcome tree of possibilities, with no upper size bound to the length of the tree. There might not be an easy proof of equality in that case. Sampling and statistical equality is the next best approach...
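
A minimal sketch of the sampling approach (far fewer samples than the "billion runs" above, and a crude tolerance in place of a proper significance test; the simulated chooser is a uniformly random stand-in, not an actual TDT simulation):

```python
# Sketch of the "statistically equal" tie-break: sample the simulation's choices,
# treat any box whose frequency is within a tolerance of the minimum as tied,
# then take the lowest-numbered tied box.

import random
from collections import Counter

def simulated_choice(n_boxes=10):
    return random.randrange(n_boxes)          # stand-in for running the sim once

def omega_selects_box(n_boxes=10, samples=100_000, tolerance=0.005):
    counts = Counter(simulated_choice(n_boxes) for _ in range(samples))
    freqs = [counts[i] / samples for i in range(n_boxes)]
    low = min(freqs)
    tied = [i for i, f in enumerate(freqs) if f - low <= tolerance]
    return min(tied)

print(omega_selects_box())                    # almost always 0, i.e. box 1
```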

comment by dlthomas · 2012-05-23T23:28:34.012Z · LW(p) · GW(p)

If I'm right, we may have shown the impossibility of a "best" decision theory, no matter how meta you get (in a close analogy to Godelian incompleteness). If I'm wrong, what have I missed?

I would say that any such problem doesn't show that there is no best decision theory, it shows that that class of problem cannot be used in the ranking.

Edited to add: Unless, perhaps, one can show that an instantiation of the problem with a particular choice of the thing being varied (in this case, the decision theory) is particularly likely to be encountered.

comment by Bundle_Gerbe · 2012-05-30T21:27:02.025Z · LW(p) · GW(p)

To draw out the analogy to Godelian incompleteness, any computable decision theory is subject to the suggested attack of being given a "Godel problem" like Problem 1, just as any computable set of axioms for arithmetic has a Godel sentence. You can always make a new decision theory TDT' that is TDT plus "do the right thing for the Godel problem". But TDT' has its own Godel problem, of course. You can't make a computable theory that says "do the right thing for all Godel problems"; if you try to do that, it would not give you something computable. I'm sure this is all just restating what you had in mind, but I think it's worth spelling out.

If you have some sort of oracle for the halting problem (i.e. a hypercomputer) and Omega doesn't, he couldn't simulate you, so you would presumably be able to always win fair problems. Otherwise the best thing you could hope for is to get the right answer whenever your computation halts, but fail to halt in your computation for some problems, such as your Godel problem. (A decision theory like this can still be given a Godel problem if Omega can solve the halting problem, "I simulated you and if you fail to halt on this problem..."). I wonder if TDT fails to halt for its Godel problem, or if some natural modification of it might have this property, but I don't understand it well enough to guess.

I am less optimistic about revising "fair" to exclude Godel problems. The analogy would be proving Peano arithmetic is complete "except for things that are like Godel sentences." I don't know of any formalizations of the idea of "being a Godel sentence".

comment by MugaSofer · 2012-12-25T16:24:51.031Z · LW(p) · GW(p)

If I'm right, we may have shown the impossibility of a "best" decision theory, no matter how meta you get (in a close analogy to Godelian incompleteness). If I'm wrong, what have I missed?

You're right. However, since all decision theories fail when confronted with their personal version of this problem, but may or may not fail in other problems, then some decision theories may be better than others. The one that is better than all the others is thus the "best" DT.

comment by Vladimir_Nesov · 2012-05-23T11:41:30.083Z · LW(p) · GW(p)

Consider Problem 3: Omega presents you with two boxes, one of which contains $100, and says that it just ran a simulation of you in the present situation and put the money in the box the simulation didn't choose.

This is a standard diagonal construction, where the environment is set up so that you are punished for the actions you choose, and rewarded for those you don't choose, irrespective of what those actions are. This doesn't depend on the decision algorithm you're implementing. A possible escape strategy is to make yourself unpredictable to the environment. The difficulty would also go away if the thing being predicted wasn't you, but something else you could predict as well (like a different agent that doesn't simulate you).

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2012-05-23T11:57:12.724Z · LW(p) · GW(p)

The correct solution to this problem is to choose each box with equal probability; this problem is the reason why decision theories have to be non-deterministic. It comes up all the time in real life: I try and guess what safe combination you chose, try that combination, and if it works I take all your money. Or I try to guess what escape route you'll use and post all the guards there.

What's interesting about Problem 2 is that it makes what would be the normal game-theoretic strategy unstable by choosing deterministically where the probabilities are exactly equal.
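
A quick check of the equal-probability strategy for the two-box version (a toy sketch; it assumes your own randomisation is independent of the randomness used in Omega's earlier simulation):

```python
# Problem 3 with a 50/50 mixed strategy: Omega fills whichever box the earlier
# simulation did not choose; if your coin is independent of the simulation's,
# you hit the full box half the time, for an expected $50.

import random

def play_once():
    sim_choice = random.choice(("left", "right"))              # stand-in for the simulation
    full_box = "left" if sim_choice == "right" else "right"    # Omega fills the other box
    my_choice = random.choice(("left", "right"))               # independent 50/50 choice
    return 100 if my_choice == full_box else 0

trials = 100_000
print(sum(play_once() for _ in range(trials)) / trials)        # ~50
```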

Replies from: APMason
comment by APMason · 2012-05-23T12:28:15.552Z · LW(p) · GW(p)

this problem is the reason why decision theories have to be non-deterministic. It comes up all the time in real life: I try and guess what safe combination you chose, try that combination, and if it works I take all your money.

Of course, you can just set up the thought experiment with the proviso that "be unpredictable" is not a possible move - in fact that's the whole point of Omega in these sorts of problems. If Omega's trying to break into your safe, he takes your money. In Nesov's problem, if you can't make yourself unpredictable, then you win nothing - it's not even worth your time to open the box. In both cases, a TDT agent does strictly as well as it possibly could - the fact that there's $100 somewhere in the vicinity doesn't change that.

comment by Wei Dai (Wei_Dai) · 2012-05-23T19:16:17.542Z · LW(p) · GW(p)

My sense is that question 6 is a better question to ask than 5. That is, what's important isn't drawing some theoretical distinction between fair and unfair problems, but finding out what problems we and/or our agents will actually face. To the extent that we are ignorant of this now but may know more in the future when we are smarter and more powerful, it argues for not fixing a formal decision theory to determine our future decisions, but instead making sure that we and/or our agents can continue to reason about decision theory the same way we currently can (i.e., via philosophy).

comment by Paul Crowley (ciphergoth) · 2012-05-23T10:37:18.433Z · LW(p) · GW(p)

BTW, general question about decision theory. There appears to have been an academic study of decision theory for over a century, and causal and evidential decision theory were set out in 1981. Newcomb's paradox was set out in 1969. Yet it seems as though no-one thought to explore the space beyond these two decision theories until Eliezer proposed TDT, and it seems as if there is a 100% disconnect between the community exploring new theories (which is centered around LW) and the academic decision theory community. This seems really, really odd - what's going on?

Replies from: Jayson_Virissimo, Eliezer_Yudkowsky, thomblake, steven0461, private_messaging, taw
comment by Jayson_Virissimo · 2012-05-23T12:59:51.074Z · LW(p) · GW(p)

Yet it seems as though no-one thought to explore the space beyond these two decision theories until Eliezer proposed TDT...

This is simply not true. Robert Nozick (who introduced Newcomb's problem to philosophers) compared/contrasted EDT and CDT at least as far back as 1993. Even back then, he noted their inadequacy on several decision-theoretic problems and proposed some alternatives.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2012-05-23T13:14:53.614Z · LW(p) · GW(p)

Me being ignorant of something seemed like a likely part of the explanation - thanks :) I take it you're referencing "The Nature of Rationality"? Not read that I'm afraid. If you can spare the time I'd be interested to know what he proposes -thanks!

Replies from: Jayson_Virissimo
comment by Jayson_Virissimo · 2012-05-23T13:49:20.512Z · LW(p) · GW(p)

I haven't read The Nature of Rationality in quite a long time, so I won't be of much help. For a very simple and short introduction to Nozick's work on decision theory, you should read this (PDF).

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-05-23T19:47:30.408Z · LW(p) · GW(p)

There were plenty of previous theories trying to go beyond CDT or EDT, they just weren't satisfactory.

Replies from: crazy88, Manfred
comment by crazy88 · 2012-05-24T21:36:36.070Z · LW(p) · GW(p)

This paper talks about reflexive decision models and claims to develop a form of CDT which one boxes.

It's in my to-read list but I haven't got to it yet so I'm not sure whether it's of interest but I'm posting it just in case (it could be a while until I have time to read it so I won't be able to post a more informed comment any time soon).

Though this theory post-dates TDT and so isn't interesting from that perspective.

comment by Manfred · 2012-05-24T19:49:00.576Z · LW(p) · GW(p)

Dispositional decision theory :P

... which I cannot find a link to the paper for, now. Hm. But basically it was just TDT, with less awareness of why.

EDIT: Ah, here it was. Credit to Tim Tyler.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-05-24T20:27:52.911Z · LW(p) · GW(p)

I checked it. Not the same thing.

comment by thomblake · 2012-05-23T13:47:36.452Z · LW(p) · GW(p)

It should be noted that Newcomb's problem was considered interesting in Philosophy in 1969, but decision theories were studied more in other fields - so there's a disconnect between the sorts of people who usually study formal decision theories and that sort of problem.

comment by steven0461 · 2012-05-23T16:29:22.594Z · LW(p) · GW(p)

(Deleting comments seems not to be working. Consider this a manual delete.)

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2012-05-23T16:50:21.645Z · LW(p) · GW(p)

Decision theory is, and can be, applied to a variety of problems here. It's just that an AI may face Newcomb-like problems, and in particular we want to ensure 1-boxing-like behavior on the part of the AI.

Replies from: cousin_it, David_Gerard
comment by cousin_it · 2012-05-23T19:09:45.309Z · LW(p) · GW(p)

The rationale for TDT-like decision theories is even more general, I think. There's no guarantee that our world contains only one copy of something. We want a decision theory that would let the AI cooperate with its copies or logical correlates, rather than wage pointless wars.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2012-05-23T21:12:16.091Z · LW(p) · GW(p)

We want a decision theory that would let the AI cooperate with its copies or logical correlates, rather than wage pointless wars.

Constructing a rigorous mathematical foundation of decision theory, to explain what a decision problem, a decision, or a goal is, is potentially more useful than resolving any given informally specified class of decision problems.

comment by David_Gerard · 2012-05-24T12:29:59.325Z · LW(p) · GW(p)

What is an example of such a real-world problem?

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2012-05-24T18:09:13.039Z · LW(p) · GW(p)

Negotiations with entities who can read the AI's source code.

Replies from: bbarth
comment by bbarth · 2012-06-03T18:35:06.164Z · LW(p) · GW(p)

Given the week+ delay in this response, it's probably not going to see much traffic, but I'm not convinced "reading" source code is all that helpful. Omega is posited to have nearly god-like abilities in this regard, but since this is a rationalist discussion, we probably have to rule out actual omnipotence.

If Omega intends to simply run the AI on spare hardware it has, then it has to be prepared to validate (in finite time and memory) that the AI hasn't so obfuscated its source as to be unintelligible to rational minds. It's also possible that the source to an AI is rather simple but that it depends on a large amount of input data in the form of a vast sea of numbers. I.e., the AI in question could be encoded as an ODE system integrator that's reliant on a massive array of parameters to get from one state to the next. I don't see why we should expect Omega to be better at picking out the relevant, predictive parts of these numbers than we are.

If the AI can hide things in its code or data, then it can hide functionality that tests to determine if it is being run by Omega or on its own protected hardware. In such a case it can lie to Omega just as easily as Omega can lie to the "simulated" version of the AI.

I think it's time we stopped positing an omniscient Omega in these complications to Newcomb's problem. They're like epicycles on Ptolemaic orbital theory in that they continue a dead end line of reasoning. It's better to recognize that Newcomb's problem is a red herring. Newcomb's problem doesn't demonstrate problems that we should expect AI's to solve in the real world. It doesn't tease out meaningful differences between decision theories.

That is, what decisions on real-world problems do we expect to be different between two AIs that come to different conclusions about Newcomb-like problems?

Replies from: Dolores1984
comment by Dolores1984 · 2012-06-03T19:00:25.585Z · LW(p) · GW(p)

You should note that every problem you list is a special case. Obviously, there are ways of cheating at Newcomb's problem if you're aware of salient details beforehand. You could simply allow a piece of plutonium to decay, and do whatever the resulting Geiger counter noise tells you to. That does not, however, support your thesis that Newcomb's problem is a totally artificial problem with no logical intrusions into reality.

As a real-world example, imagine an off-the-shelf stock market optimizing AI. Not sapient, to make things simpler, but smart. When any given copy begins running, there are already hundreds or thousands of near-identical copies running elsewhere in the market. If it fails to predict their actions from its own, it will do objectively worse than it might otherwise do.

Replies from: bbarth
comment by bbarth · 2012-06-04T02:57:36.166Z · LW(p) · GW(p)

I don't see how your example is apt or salient. My thesis is that Newcomb-like problems are the wrong place to be testing decision theories because they do not represent realistic or relevant problems. We should focus on formalizing and implementing decision theories and throw real-world problems at them rather than testing them on arcane logic puzzles.

Replies from: Dolores1984
comment by Dolores1984 · 2012-06-04T03:24:51.483Z · LW(p) · GW(p)

Well... no, actually. A good decision theory ought to be universal. It ought to be correct, and it ought to work. Newcomb's problem is important, not because it's ever likely to happen, but because it shows a case in which the normal, commonly accepted approach to decision theory (CDT) failed miserably. This 'arcane logic puzzle' is illustrative of a deeper underlying flaw in the model, which needs to be addressed. It's also a flaw that'd be much harder to pick out by throwing 'real world' problems at it over and over again.

Replies from: bbarth
comment by bbarth · 2012-06-04T12:54:49.779Z · LW(p) · GW(p)

Seems unlikely to work out to me. Humans evolved intelligence without Newcomb-like problems; as we are the only example of intelligence that we know of, it's clearly possible to develop intelligence without them. Furthermore, the general theory seems to be that AIs will start dumber than humans and iteratively improve until they're smarter. Given that, why are we so interested in problems like these (which humans don't universally agree about the answers to)?

I'd rather AIs be able to help us with problems like "what should we do about the economy?" or even "what should I have for dinner?" instead of worrying about what we should do in the face of something godlike.

Additionally, human minds aren't universal (assuming that universal means that they give the "right" solutions to all problems), so why should we expect AIs to be? We certainly shouldn't expect this if we plan on iteratively improving our AIs.

Replies from: bbarth
comment by bbarth · 2012-06-05T14:43:15.070Z · LW(p) · GW(p)

Harsh crowd.

It might be nice to be able to see the voting history (not the voters' names, but the number of up and down votes) on a comment. I can't tell if my comments are controversial or just down-voted by two people. Perhaps even just the number of votes would be sufficient (e.g. -2/100 vs. -2/2).

Replies from: TheOtherDave
comment by TheOtherDave · 2012-06-05T14:57:26.103Z · LW(p) · GW(p)

If it helps: it's a fairly common belief in this community that a general-purpose optimization tool is both far superior to, and more interesting to talk about than, a variety of special-purpose tools.

Of course, that doesn't mean you have to be interested in general-purpose optimization tools; if you're more interested in decision theory for dinner-menu or economic planners, by all means post about that if you have something to say.

But I suspect there are relatively few communities in which "why are you all so interested in such a stupid and uninteresting topic?" will get you much community approval, and this isn't one of them.

Replies from: bbarth
comment by bbarth · 2012-06-05T16:49:47.763Z · LW(p) · GW(p)

I'm interested in general purpose optimizers, but I bet that they will be evolved from AIs that were more special purpose to begin with. E.g., IBM Watson moving from Jeopardy!-playing machine to medical diagnostic assistant with a lot of the upfront work being on rapid NLP for the J! "questions".

Also, there's no reason that I've seen here to believe that Newcomb-like problems give insights into how to develop decision theories that allow us to solve real-world problems. It seems like arguing about corner cases. Can anyone establish a practical problem that TDT fails to solve because it fails to solve these other problems?

Beyond this, my belief is that without formalization and programming of these decision frameworks, we learn very little. Asking what xDT does in some abstract situation seems, so far, very hand-wavy. Furthermore, it seems to me that the community is drawn to these problems because they are deceptively easy to state and talk about online, but minds are inherently complex, opaque, and hard to reason about.

I'm having a hard time understanding how correctly solving Newcomb-like problems is expected to advance the field of general optimizers. It seems out of proportion to the problems at hand to expect a decision theory to solve problems of this level of sophistication when the current theories don't seem to obviously "solve" questions like "what should we have for lunch?". I get the feeling that supporters of research on these theories assume that, of course, xDT can solve the easy problems so let's do the hard ones. And, I think evidence for this assumption is very lacking.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-06-05T16:57:15.998Z · LW(p) · GW(p)

That's fair.

Again, if you are interested in more discussion about automated optimization on the level of "what should we have for lunch?" I encourage you to post about it; I suspect a lot of other people are interested as well.

Replies from: bbarth
comment by bbarth · 2012-06-05T18:53:19.526Z · LW(p) · GW(p)

Yeah, I might, but here I was just surprised by the down-voting for a contrary opinion. It seems like the thing we ought to foster, not hide.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-06-05T19:35:32.394Z · LW(p) · GW(p)

As I tried to express in the first place, I suspect what elicited the disapproval was not the contrary opinion, but the rudeness.

Replies from: bbarth
comment by bbarth · 2012-06-06T19:55:40.178Z · LW(p) · GW(p)

Sorry. It didn't seem rude to me. I'm just frustrated with where I see folks spending their time.

My apologies to anyone who was offended.

comment by private_messaging · 2012-05-26T02:48:15.675Z · LW(p) · GW(p)

This seems really, really odd - what's going on?

What would you expect? The usual: half-educated people making basic errors, not making sure their decision theories work on 'trivial' problems, not doing the work needed to find flaws in their own ideas, and hence announcing solutions to hard problems that others don't announce. Same as asking why only some cold-fusion community has solved the world's energy problems.

edit: actually, in all fairness, I think there may be some not-bad ideas to explore in the work you see on LW. It is just that what normally gets published as 'decision theory' is formalized and structured in such a way that one doesn't have to search an enormous space of possible flaws, possible steel-men, possible flaws in the steel-men, etc., to declare something invalid (that is the point of writing things formally and making mathematical proofs: you can expect to be able to see whether it's wrong). I don't see any to-the-point formal papers on TDT here.

comment by taw · 2012-05-23T12:49:09.918Z · LW(p) · GW(p)

Crackpot Decision Theories popular around here do not solve any real problem arising from laws of causality operating normally, so there's no point studying them seriously.

Your question is like asking why there's no academic interest in Harry Potter physics or the geography of Westeros.

Replies from: ciphergoth, army1987, None
comment by Paul Crowley (ciphergoth) · 2012-05-23T12:56:25.224Z · LW(p) · GW(p)

Err, this would also predict no academic interest in Newcomb's Problem, and that isn't so.

Replies from: taw
comment by taw · 2012-05-23T17:16:53.432Z · LW(p) · GW(p)

Not counting philosophers, where's this academic interest in Newcomb's paradox?

Replies from: APMason, Douglas_Knight
comment by APMason · 2012-05-23T17:19:14.029Z · LW(p) · GW(p)

Why are we not counting philosophers? Isn't that like saying, "Not counting physicists, where's this supposed interest in gravity?"

Replies from: Wei_Dai, army1987, taw
comment by Wei Dai (Wei_Dai) · 2012-05-23T19:44:24.635Z · LW(p) · GW(p)

I think taw's point was that Newcomb's Problem has no practical applications, and would answer your question by saying that engineers are very interested in gravity. My answer to taw would be that Newcomb's Problem is just an abstraction of Prisoner's Dilemma, which is studied by economists, behavior biologists, evolutionary psychologists, and AI researchers.

Replies from: taw
comment by taw · 2012-05-23T19:49:11.907Z · LW(p) · GW(p)

Prisoner's Dilemma relies on causality, Newcomb's Paradox is anti-causality. They're as close to each other as astronomy and astrology.

Replies from: ArisKatsaris, Ezekiel, Wei_Dai, MugaSofer
comment by ArisKatsaris · 2012-05-24T08:58:10.775Z · LW(p) · GW(p)

Prisoner's Dilemma relies on causality, Newcomb's Paradox is anti-causality.

The contents of Newcomb's boxes are caused by the kind of agent you are -- which is (effectively by definition of what 'kind of agent' means) mapped directly to the decision you will take.

Newcomb's paradox can be called anti-causality only in some confused anti-compatibilist sense in which determinism is opposed to free will, and therefore "the kind of agent you are" must be opposed to "the decisions you make" -- instead of absolutely correlating with them.

comment by Ezekiel · 2012-05-23T22:51:42.968Z · LW(p) · GW(p)

In what way is Newcomb's Problem "anti-causality"?

If you don't like the superpowerful predictor, it works for human agents as well. Imagine you need to buy something but don't have cash on you, so you tell the shopkeeper you'll pay him tomorrow. If he thinks you're telling the truth, he'll give you the item now and let you come back tomorrow. If not, you lose a day's worth of use, and so some utility.

So your best bet (if you're selfish) is to tell him you'll pay tomorrow, take the item, and never come back. But what if you're a bad liar? Then you'll blush or stammer or whatever, and you won't get your good.

A regular Causal agent, however, having taken the item, will not come back the next day - and you know it, and it will show on your face. So in order to get what you want, you have to actually be the kind of person who respects their past self's decisions - a TDT agent, or a CDT agent with some pre-commitment system.

The above has the same attitude to causality as Newcomb's Problem - specifically, it includes another agent rewarding you based on that agent's calculations of your future behaviour. But it's a situation I've been in several times.

EDIT: Grammar.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2012-05-24T07:22:37.128Z · LW(p) · GW(p)

This example is much like Parfit's Hitchhiker in a less extreme form.

comment by Wei Dai (Wei_Dai) · 2012-05-24T17:00:30.622Z · LW(p) · GW(p)

I actually have some sympathy for your position that Prisoner's Dilemma is useful to study, but Newcomb's Paradox isn't. The way I would put it is, as the problems we study increase in abstraction from real world problems, there's the benefit of isolating particular difficulties and insights, and making it easier to make theoretical progress, but also the danger that the problems we pay attention to are no longer relevant to the actual problems we face. (See another recent comment of mine making a similar point.)

Given that we have little more than intuition to guide us on "how much abstraction is too much?", it doesn't seem unreasonable for people to disagree on this topic and pursue different approaches, as long as the possibility of real-world irrelevance isn't completely overlooked.

comment by MugaSofer · 2012-12-26T01:14:13.511Z · LW(p) · GW(p)

Prisoner's Dilemma relies on causality, Newcomb's Paradox is anti-causality.

So, you consider this notion of "causality" more important than actually succeeding? If I showed up in a time machine, would you complain I was cheating?

Also, dammit, karma toll. Sorry, anyone who wants to answer me.

comment by A1987dM (army1987) · 2012-05-28T16:09:28.131Z · LW(p) · GW(p)

"Not counting physicists, where's this supposed interest in gravity?"

Engineering.

comment by taw · 2012-05-23T19:47:52.691Z · LW(p) · GW(p)

Philosophy contains some useful parts, but it also contains massive amounts of bullshit. Starting, let's say, here.

Decision theory is studied very seriously by mathematicians and others, and they don't care at all for Newcomb's Paradox.

comment by Douglas_Knight · 2012-05-23T20:06:14.626Z · LW(p) · GW(p)

Newcomb himself was not a philosopher.

I think Newcomb introduced it as a simplification of the prisoner's dilemma. The game theory party line is that you should 2-box and defect. But the same logic says that you should defect in iterated PD, if the number of rounds is known. This third problem is popular in academia, outside of philosophy. It is not so popular in game theory, but the game theorists admit that it is problematic.

comment by A1987dM (army1987) · 2012-05-28T16:08:44.735Z · LW(p) · GW(p)

Crackpot Decision Theories popular around here do not solve any real problem arising from laws of causality operating normally, so there's no point studying them seriously.

Yeah, assuming a universe where causality only goes forward in time and where your decision processes are completely hidden from the outside, CDT works; but humans are not perfect liars, so they leak information about the decision they're about to make before they start to consciously act upon it. So the assumptions of CDT are only approximately true, and in some cases TDT may return better results.

comment by [deleted] · 2012-05-28T16:34:37.501Z · LW(p) · GW(p)

CDT eats the donut "just this once" every time and gets fat. TDT says "I shouldn't eat donuts" and does not get fat.

Replies from: wedrifid, army1987
comment by wedrifid · 2012-05-28T17:03:19.295Z · LW(p) · GW(p)

TDT says "I shouldn't eat donuts" and does not get fat.

The deontological agent might say that. The TDT agent just decides "I will not eat this particular donut now" and it so happens that it would also make decisions not to eat other donuts in similar circumstances.

The use of the term TDT or "timeless" is something that gets massively inflated to mean anything noble sounding. All because there is one class of contrived circumstance in which the difference between CDT and TDT is that TDT will cooperate.

Replies from: army1987, None
comment by A1987dM (army1987) · 2012-05-29T11:15:15.368Z · LW(p) · GW(p)

It might not be rigorous, but it's still a good analogy IMO. Akrasia can be seen as you and your future self playing a non-zero-sum game, which in some cases has PD-like payoffs.

comment by [deleted] · 2012-05-28T17:08:02.050Z · LW(p) · GW(p)

The TDT agent just decides "I will not eat this particular donut now" and it so happens that it would also make decisions not to eat other donuts in similar circumstances.

Right. I was being a bit messy with describing the TDT thought process. The point is that TDT considers all donut-decisions as a single decision.

comment by A1987dM (army1987) · 2012-05-29T11:15:54.612Z · LW(p) · GW(p)

You might want to link to http://lesswrong.com/lw/4sh/how_i_lost_100_pounds_using_tdt/.

Replies from: None
comment by [deleted] · 2012-05-29T17:56:26.552Z · LW(p) · GW(p)

Or I can just lazily allude to it and then upvote you for linking it.

Replies from: army1987
comment by A1987dM (army1987) · 2012-05-29T18:30:23.539Z · LW(p) · GW(p)

Yeah, I guessed that you were alluding to it, but I thought that people who hadn't read it wouldn't get the allusion.

comment by selylindi · 2012-05-24T15:25:39.404Z · LW(p) · GW(p)

Problem 2 reminds me strongly of playing GOPS.

For those who aren't familiar with it, here's a description of the game. Each player receives a complete suit of standard playing cards, ranked Ace low through King high. Another complete suit, the diamonds, is shuffled (or not, if you want a game of complete information) and put face down on the table; these diamonds have point values Ace=1 through King=13. In each trick, one diamond is flipped face-up. Each player then chooses one card from their own hand to bid for the face-up diamonds, and all bids are revealed simultaneously. Whoever bids highest wins the face-up diamonds, but if there is a tie for the highest bid (even when other players did not tie), then no one wins them and they remain on the table to be won along with the next trick. All bids are discarded after every trick.
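
For concreteness, here's a minimal sketch (my own, with illustrative names, not part of the game's official rules) of how a single trick resolves under these rules:

    # A rough sketch of one GOPS trick, just to make the bidding rules concrete.
    def resolve_trick(bids, pot):
        """bids: player -> card rank (1=Ace .. 13=King); pot: diamond points at stake.
        Returns (winner, points won); on a tie for the highest bid, nobody wins
        and the pot carries over to the next trick."""
        top = max(bids.values())
        winners = [p for p, b in bids.items() if b == top]
        if len(winners) == 1:
            return winners[0], pot
        return None, 0  # tied highest bids cancel; the diamonds stay on the table

    # The King (13 points) is flipped; two players bid their Kings, one sandbags.
    print(resolve_trick({"A": 13, "B": 13, "C": 2}, pot=13))  # (None, 0): the Kings cancel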

Especially when the King comes up early, you can see everyone looking at each other trying to figure out how many levels deep to evaluate "What will the other players do?".

(1) Play my King to be likely to win. (2) Everyone else is likely to do (1) also, which will waste their Kings. So instead play low while they throw away their Kings. (3) If the players are paying attention, they might all realize they should (2), in which case I should play highest low card - the Queen. (4+) The 4th+ levels could repeat (2) and (3) mutatis mutandis until every card has been the optimal choice at some level. In practice, players immediately recognize the futility of that line of thought and instead shift to the question: How far down the chain of reasoning are the other players likely to go? And that tends to depend on knowing the people involved and the social context of the game.

Maybe playing GOPS should be added to the repertoire of difficult decision theory puzzles alongside the prisoner's dilemma, Newcomb's problem, Pascal's mugging, and the rest of that whole intriguing panoply. We've had a Prisoner's Dilemma competition here before - would anyone like to host a GOPS competition?

Replies from: shokwave
comment by shokwave · 2012-05-25T08:19:52.665Z · LW(p) · GW(p)

I'm going to play this game at LW meetups in future. Hopefully some insights will arise out of it.

I also think I might try to generalise this kind of problem, in the vein of trolley problems being a generalisation of some types of decisions and Parfit's Hitchhiker being a generalisation of precommitment-favouring situations.

comment by gRR · 2012-05-23T19:01:14.263Z · LW(p) · GW(p)

The problems look like a kind of anti-Prisoner's Dilemma. An agent plays against an opponent, and gets a reward iff they played differently. Then any agent playing against itself is screwed.
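
A tiny sketch (mine, with a hypothetical fixed policy) of that payoff structure:

    # "Anti-Prisoner's Dilemma": you are rewarded only if you and your opponent
    # acted differently. The fixed policy below is purely illustrative.
    def payoff(my_move, their_move, reward=1_000_000):
        return reward if my_move != their_move else 0

    def tdt_like_agent():
        return "one-box"  # any deterministic policy will do

    # Two copies of the same deterministic agent necessarily match, so an agent
    # facing itself always scores 0, whatever its policy is.
    print(payoff(tdt_like_agent(), tdt_like_agent()))  # 0
    print(payoff(tdt_like_agent(), "two-box"))         # 1000000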

comment by shokwave · 2012-05-23T10:28:41.713Z · LW(p) · GW(p)

The more I think about it, the more interesting these problems get! Problem 1 seems to re-introduce all the issues that CDT has on Newcomb's Problem, but for TDT. I first thought to introduce the ability to 'break' with past selves, but that doesn't actually help with the simulation problem.

It did lead to a cute observation, though. Given that TDT cares about all sufficiently accurate simulations of itself, it's actually winning.

  • It one-boxes in Problem 1, thus ensuring that its simulacrum one-boxed in Omega's pre-game simulation, so TDT walked away with $2,000,000 (whereas CDT, unable to derive utility from a simulation of TDT, walked away with $1,001,000). This is proof against increasing the value of the second box; TDT still gains at least 1 dollar more (when the second box is $999,999), and simply two-boxes when the second box is as or more valuable.
  • In Problem 2, it picks in such a way that Omega must run at least 10 trials plus the game itself; this means 11 TDT agents have had a 10% shot at $1,000,000. With an expected value of $1,100,000 it is doing better than the CDT agent walking away with $1,000,000 (see the quick check below).
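
A quick arithmetic check of those numbers (mine), under the comment's assumption that a TDT agent counts its simulated copies' winnings as its own utility:

    # Checking the totals above, assuming the TDT agent counts the winnings of
    # its simulated copies as its own utility.
    tdt_total_p1 = 1_000_000 + 1_000_000      # Problem 1: simulated run + real run, both one-box
    cdt_total_p1 = 1_000_000 + 1_000          # Problem 1: the real CDT agent two-boxes
    tdt_expected_p2 = 11 * 1_000_000 // 10    # Problem 2: 11 instances, each a 1-in-10 shot
    cdt_expected_p2 = 1_000_000               # Problem 2: CDT just takes box 1
    print(tdt_total_p1, cdt_total_p1)         # 2000000 1001000
    print(tdt_expected_p2, cdt_expected_p2)   # 1100000 1000000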

It doesn't seem very relevant, but I think if we explored Richard's point that we need to actually formalise this, we'd find that any simulation high-fidelity enough to actually bind a TDT agent to its previous actions would necessarily give the agent the utility from the simulations; and vice versa, any simulation not accurate enough to give utility would be sufficiently different from TDT to allow our agent to two-box when the simulated agent one-boxed.

Replies from: None, None
comment by [deleted] · 2012-05-23T11:15:39.291Z · LW(p) · GW(p)

Omega doesn't need to simulate the agent actually getting the reward. After the agent has made its choice, the simulation can just end.

Replies from: army1987, kybernetikos, kybernetikos, shokwave
comment by A1987dM (army1987) · 2012-05-28T09:31:42.063Z · LW(p) · GW(p)

Omega is supposed to be always truthful, so either he rewards the sims as well, or you know something the sims don't and hence it's not obvious you'll do the same as them.

Replies from: None
comment by [deleted] · 2012-05-28T10:15:10.034Z · LW(p) · GW(p)

I thought Omega was allowed to lie to sims.

Even if he's not, after he's given a $1m simulated reward, does he then have to keep up a simulated environment for the sim to actually spend the money?

Replies from: army1987, wedrifid
comment by A1987dM (army1987) · 2012-05-28T11:14:57.479Z · LW(p) · GW(p)

If he can lie to sims, then you can't know he's not lying to you unless you know you're not a sim. If you do, it's not obvious you'd choose the same way as if you didn't.

Replies from: DanArmak
comment by DanArmak · 2012-05-28T15:18:33.293Z · LW(p) · GW(p)

For instance, if you think Omega is lying and completely ignore everything he says, you obviously two-box.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-05-28T15:32:18.283Z · LW(p) · GW(p)

Why not zero-box in this case? I mean, what reason would I have to expect any money at all?

Replies from: DanArmak
comment by DanArmak · 2012-05-28T16:02:27.013Z · LW(p) · GW(p)

Well, as long as you believe Omega enough to think no box contains sudden death or otherwise negative utility, you'd open them to see what was inside. But yes, you might not believe Omega at all.

General question: suppose we encounter an alien. We have no idea what its motivations, values, goals, or abilities are. On the other hand, it may have observed any amount of human comm traffic from wireless EM signals since the invention of radio, and from actual spy-probes before the human invention of high tech that would detect them.

It signals us in Morse code from its remote starship, offering mutually beneficial trade.

What prior should we have about the alien's intentions? Should we use a naive uniform prior that would tell us it's as likely to mean us good as harm, and so never reply because we don't know how it will try to influence our actions via communications? Should it tell us that different agents who don't explicitly value one another will conflict to the extent their values differ, and so, since value-space is vast and a randomly selected alien is unlikely to share many values with us, we should prepare for war? Should it tell us we can make some assumptions (which?) about naturally evolved agents or their Friendly-to-themselves creations? How safe are we if we try to "just read" English text written by an unknown, possibly superintelligent being which may have observed all our broadcast traffic since the age of radio? What does our non-detection of this alien civ until they chose to initiate contact tell us? Etc.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-05-28T17:00:11.309Z · LW(p) · GW(p)

A 50% chance of meaning us good vs harm isn't a prior I find terribly compelling.

There's a lot to say here, but my short answer is that this is both an incredibly dangerous and incredibly valuable situation, in which both the potential opportunity costs and the potential actual costs are literally astronomical, and in which there are very few things I can legitimately be confident of.

The best I can do in such a situation is to accept that my best guess is overwhelmingly likely to be wrong, but that it's slightly less likely to be wrong than my second-best guess, so I should operate on the basis of my best guess despite expecting it to be wrong. Where "best guess" here is the thing I consider most likely to be true, not the thing with the highest expected value.

I should also note that my priors about aliens in general -- that is, what I consider likely about a randomly selected alien intelligence -- are less relevant to this scenario than what I consider likely about this particular intelligence, given that it has observed us for long enough to learn our language, revealed itself to us, communicated with us in Morse code, offered mutually beneficial trade, etc.

The most tempting belief for me is that the alien's intentions are essentially similar to ours. I can even construct a plausible sounding argument for that as my best guess... we're the only other species I know capable of communicating the desire for mutually beneficial trade in an artificial signalling system, so our behavior constitutes strong evidence for their behavior. OTOH, it's pretty clear to me that the reason I'm tempted to believe that is because I can do something with that belief; it gives me a lot of traction for thinking about what to do next. (In a nutshell, I would conclude from that assumption that it means to exploit us for its long-term benefit, and whether that's good or bad for us depends entirely on what our most valuable-to-it resources are and how it can most easily obtain them and whether we benefit from that process.) Since that has almost nothing to do with the likelihood of it being true, I should distrust my desire to believe that.

Ultimately, I think what I do is reply that I value mutually beneficial trade with them, but that I don't actually trust them and must therefore treat them as a potential threat until I have gathered more information about them, while at the same time refraining from doing anything that would significantly reduce our chances of engaging in mutually beneficial trade in the future, and what do they think about all that?

comment by wedrifid · 2012-05-28T14:02:58.333Z · LW(p) · GW(p)

I thought Omega was allowed to lie to sims.

He can certainly give them counterfactual 'realities'. It would seem that he should be assumed to at least provide counterfactual realities wherein information provided by the simulation's representation of Omega indicates that he is perfectly trustworthy.

Even if he's not, after he's given a $1m simulated reward, does he then have to keep up a simulated environment for the sim to actually spend the money?

No. But if for whatever reason the simulated environment persists it should be one that is consistent with Omega keeping his word. Or, if part of the specification of the problem or the declarations made by Omega directly pertain to claims about what He will do regarding simulation then he will implement that policy.

comment by kybernetikos · 2012-06-06T11:51:02.463Z · LW(p) · GW(p)

Omega (who experience has shown is always truthful)

Omega doesn't need to simulate the agent actually getting the reward. After the agent has made its choice, the simulation can just end.

If we are assuming that Omega is trustworthy, then Omega needs to be assumed to be trustworthy in the simulation too. If they didn't allow the simulated version of the agent to enjoy the fruits of their choice, then they would not be trustworthy.

comment by kybernetikos · 2012-06-01T22:01:33.243Z · LW(p) · GW(p)

Actually, I'm not sure this matters. If the simulated agent knows he's not getting a reward, he'd still want to choose so that the nonsimulated version of himself gets the best reward.

So the problem is that the best answer is unavailable to the simulated agent: in the simulation you should one box and in the 'real' problem you'd like to two box, but you have no way of knowing whether you're in the simulation or the real problem.

Agents that Omega didn't simulate don't have the problem of worrying whether they're making the decision in a simulation or not, so two boxing is the correct answer for them.

The decision being made is very different for an agent that has to make the decision twice (where the first decision affects the payoff of the second) than for an agent that has to make the decision only once, so I think that in reality the problem perhaps does collapse down to an 'unfair' one, because the TDT agent is presented with an essentially different problem from a non-TDT agent.

comment by shokwave · 2012-05-24T02:28:17.348Z · LW(p) · GW(p)

Then the simulated TDT agent will one-box in Problem 1 so that the real TDT agent can two-box and get $1,001,000. The simulated TDT agent will pick a box randomly with a uniform distribution in Problem 2, so that the real TDT agent can select box 1 like CDT would.

(If the agent is not receiving any reward, it will act in a way that maximises the reward agents sufficiently similar to it would receive. In this situation of 'you get no reward', CDT would be completely indifferent and could not be relied upon to set up a good situation for future actual CDT agents.)

Of course, this doesn't work if the simulated TDT agent is not aware that it won't receive a reward. This strays pretty close to "Omega is all-powerful and out to make sure you lose"-type problems.

Replies from: JGWeissman
comment by JGWeissman · 2012-05-24T03:18:00.177Z · LW(p) · GW(p)

Of course, this doesn't work if the simulated TDT agent is not aware that it won't receive a reward.

The simulated TDT agent is not aware that it won't receive a reward, and therefore it does not work.

This strays pretty close to "Omega is all-powerful and out to make sure you lose"-type problems.

Yeah, it doesn't seem right to me that the decision theory being tested is used in the setup of the problem. But I don't think that the ability to simulate without rewarding the simulation is what pushes it over the threshold of "unfair".

Replies from: kybernetikos, bogus, shokwave
comment by kybernetikos · 2012-06-06T11:54:52.360Z · LW(p) · GW(p)

I don't think that the ability to simulate without rewarding the simulation is what pushes it over the threshold of "unfair".

It only seems that way because you're thinking from the non-simulated agent's point of view. How do you think you'd feel if you were a simulated agent, and after you made your decision Omega said 'Ok, cheers for solving that complicated puzzle, I'm shutting this reality down now because you were just a simulation I needed to set a problem in another reality'. That sounds pretty unfair to me. Wouldn't you be saying 'give me my money you cheating scum'?

And as has been already pointed out, they're very different problems. If Omega actually is trustworthy, integrating across all the simulations gives infinite utility for all the (simulated) TDT agents and a total $1,001,000 utility for the (supposedly non-simulated) CDT agent.

Replies from: JGWeissman
comment by JGWeissman · 2012-06-06T16:20:02.968Z · LW(p) · GW(p)

It only seems that way because you're thinking from the non-simulated agent's point of view. How do you think you'd feel if you were a simulated agent, and after you made your decision Omega said 'Ok, cheers for solving that complicated puzzle, I'm shutting this reality down now because you were just a simulation I needed to set a problem in another reality'. That sounds pretty unfair to me. Wouldn't you be saying 'give me my money you cheating scum'?

We were discussing if it is a "fair" test of the decision theory, not if it provides a "fair" experience to any people/agents that are instantiated within the scenario.

And as has been already pointed out, they're very different problems. If Omega actually is trustworthy, integrating across all the simulations gives infinite utility for all the (simulated) TDT agents and a total $1,001,000 utility for the (supposedly non-simulated) CDT agent.

I am aware that they are different problems. That is why the version of the problem in which simulated agents get utility that the real agent cares about does nothing to address the criticism of TDT that it loses in the version where simulated agents get no utility. Postulating the former in response to the latter was a fail in using the Least Convenient Possible World.

The complaints about Omega being untrustworthy are weak. Just reformulate the problem so Omega says to all agents, simulated or otherwise, "You are participating in a game that involves simulated agents and you may or may not be one of the simulated agents yourself. The agents involved in the game are the following: <describes agents' roles in third person>".

Replies from: kybernetikos
comment by kybernetikos · 2012-06-19T07:46:29.046Z · LW(p) · GW(p)

The complaints about Omega being untrustworthy are weak. Just reformulate the problem so Omega says to all agents, simulated or otherwise, "You are participating in a game that involves simulated agents and you may or may not be one of the simulated agents yourself. The agents involved in the game are the following: <describes agents' roles in third person>".

Good point.

That clears up the possibility of summing utility across possible worlds, but it still doesn't address the fact that the TDT agent is being asked to (potentially) make two decisions while the non-TDT agent is being asked to make only one. That seems to me to make the scenario unfair (it's what I was trying to get at in the 'very different problems' statement).

comment by bogus · 2012-05-26T21:44:38.220Z · LW(p) · GW(p)

The simulated TDT agent is not aware that it won't receive a reward, and therefore it does not work.

This raises an interesting problem, actually. Omega could pose the following question:

Here are two boxes, A and B; you may choose either box, or take both. You are in one of two states of nature, with equal probability: one possibility is that you're in a simulation, in which case you will receive no reward, no matter what you choose. The other possibility is that a simulation of this problem was presented to an agent running TDT. I won't tell you what the agent decided, but I will tell you that if the agent two-boxed then I put nothing in Box B, whereas if the agent one-boxed then I put $1 million in Box B. Regardless of how the simulated agent decided, I put $1000 in Box A. Now please make your choice.

The solution for a TDT agent seems to be choosing box B, but there may be similar games where it makes sense to run a mixed strategy. I don't think that it makes much sense to rule out the possibility of running mixed strategies across simulations, because in most models of credible precommitment the other players do not have this kind of foresight (although Omega possibly does).
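
For what it's worth, here is a small expected-value sketch (mine, under the stated 50/50 simulation probability and the assumption that the real and simulated TDT choices necessarily coincide, with sims earning nothing):

    # Expected values in the 50/50 variant above, assuming the TDT agent's choice
    # and its simulated copy's choice coincide, and that a sim earns nothing.
    def tdt_expected(one_box, p_sim=0.5):
        # If real: Box B holds $1M iff the (identical) simulated choice was to one-box.
        real_payout = 1_000_000 if one_box else 1_000  # one-box: B only; two-box: A plus empty B
        return (1 - p_sim) * real_payout

    print(tdt_expected(one_box=True))   # 500000.0
    print(tdt_expected(one_box=False))  # 500.0 -> pure one-boxing looks right here

    # A CDT agent dropped into the real slot simply two-boxes and collects whatever
    # the simulated TDT agent's one-boxing left behind: $1,001,000.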

And yes, it is still the case that a CDT agent can outperform TDT, as long as the TDT agent knows that if she is in a simulation, her choice will influence a real game played by a TDT, with some probability. Nevertheless, as the probability of "leaking" to CDT increases, it does become more profitable (AIUI) for TDT to two-box with low probability.

comment by shokwave · 2012-05-25T08:16:27.564Z · LW(p) · GW(p)

The simulated TDT agent is not aware that it won't receive a reward, and therefore it does not work. ... I don't think that the ability to simulate without rewarding the simulation is what pushes it over the threshold of "unfair".

I do agree. I think my previous post was still exploring the "can TDT break with a simulation of itself?" question, which is interesting but orthogonal.

comment by [deleted] · 2012-05-23T11:34:34.437Z · LW(p) · GW(p)

Corollary: Omega can statically analyse the TDT agent's decision algorithm.

comment by Richard_Kennaway · 2012-05-23T09:38:41.854Z · LW(p) · GW(p)

"Before you entered the room, I ran a simulation of this problem as presented to an agent running TDT. ...."

This needs some serious mathematics underneath it. Omega is supposed to run a simulation of how an agent of a certain sort handled a certain problem, the result of that simulation being a part of the problem itself. I don't think it's possible to tell, just from these English words, that there is a solution to this fixed-point formulation. And TDT itself hasn't been formalised, although I assume there are people (Eliezer? Marcello? Wei Dai?) working on that.

Cf. the construction of Gödel sentences: you can't just assume that a proof-system can talk about itself, you have to explicitly construct a way for it to talk about itself and show precisely what "talking about itself" means, before you can do all the cool stuff about undecidable sentences, Löb's theorem, and so on.

Replies from: Eliezer_Yudkowsky, ciphergoth
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-05-23T19:57:46.427Z · LW(p) · GW(p)

This seems well-specified to me: Since the agent is not told its own output in advance, it is possible to run the "simulation" and the "real version" in finite time. If you hand me a computer program that is the agent, I will hand you a computer program that is Omega and the environment.

Replies from: Pentashagon
comment by Pentashagon · 2012-05-29T18:18:50.245Z · LW(p) · GW(p)

Here is a useful program:

    if (being_simulated())
        pick_box_b();
    else
        pick_both_boxes();

being_simulated() is an interesting function; it would presumably consist of things like comparing star maps across as broad a range of EM frequencies as possible with historical knowledge, the positions of planetary bodies, Earthly satellites, quasar signatures, and inputs of other individuals and networks. It could also be as simple as "if (get_current_average_cosmic_ray_bit_flips() != historical_bit_flips_average) return true;" Since my brain tries to play Omega with me every night (well, on the nights I get good sleep) I think we all have some practical experience with being_simulated(). While our brains are nowhere near as capable as Omega they are fairly good at suspending our own disbelief during dreams. What generally breaks disbelief is trying to behave rationally, including observing/testing the environment, which in reality would be the outside world but in a dream is simulated. Ultimately Omega is forced to simulate a very large subset of the Universe in order to cause being_simulated() to return false, or to alter the program itself so that being_simulated() vacuously returns true.

To prevent the latter attack the program should execute a fully homomorphically encrypted sub-function and base all its decisions on the outputs of that function. In other words its true decision theory should be hidden even from the hardware running it. This pushes Omega's capabilities into the cryptographic realm and the ability to solve arbitrarily hard cryptography problems. Once this is done, Omega becomes an oracle for solving any hard problem, e.g. by running this function as an input to the final decision theory:

    if (strange_paradoxical_decision_is_required()) {
        bank_secret = factor_omegas_bank_account_public_key();
        return sign_and_cash_check_for_one_billion_dollars(bank_secret);
    }

This restricts Omega to putting a time limit (or computation limit) on its simulations to avoid being suckered by its own computing power. Presumably Omega also puts a time limit on the real time a being can spend thinking before choosing which boxes it wants, but presumably that time is long enough for some fairly serious philosophizing and likely more than enough time for a reliable being_simulated() call.

Another approach is to consider how much information an intelligent being can gather about its environment. A human would have trouble determining whether two boxes are empty or full, but additional sensors could detect the presence of money in the boxes fairly reliably. What matters for Omega is making sure that a being cannot determine the contents of the boxes before picking them. From the perspective of a rational being this is equivalent to the boxes being filled with cash after making a decision. If Omega has the capability to obscure the contents of boxes then Omega certainly has the ability to obscure the placement of money into the boxes as they are chosen (just a glorified magic trick). Given that interpretation, CDT will one-box.

EDIT: I apologize for the formatting, I am not very good at escaping/formatting apparently.

Replies from: drnickbone
comment by drnickbone · 2012-05-30T06:43:09.931Z · LW(p) · GW(p)

if (being_simulated()) pick_box_b(); else pick_both_boxes()

This strategy is discussed in the follow-up article.

In general it's difficult, because by assumption Omega has the computational power to simulate more or less anything (including an environment matching the world as you remember it; this might be like the real world, or you might have spent your whole life so far as a sim). And the usual environment for these problems is a sealed room, so that you can't look at the stars etc.

comment by Paul Crowley (ciphergoth) · 2012-05-23T12:06:58.654Z · LW(p) · GW(p)

But TDT already has this problem - TDT is all about finding a fixed point decision.

comment by Paul Crowley (ciphergoth) · 2012-05-23T08:57:03.459Z · LW(p) · GW(p)

I think it's right to say that these aren't really "fair" problems, but they are unfair in a very interesting new way that Eliezer's definition of fairness doesn't cover, and it's not at all clear that it's possible to come up with a nice new definition that avoids this class of problem. They remind me of "Lucas cannot consistently assert this sentence".

comment by A1987dM (army1987) · 2012-05-28T09:08:53.926Z · LW(p) · GW(p)

Omega (who experience has shown is always truthful) presents the usual two boxes A and B and announces the following. "Before you entered the room, I ran a simulation of this problem as presented to an agent running TDT.

If he's always truthful, then he didn't lie to the simulation either and this means that he did infinitely many simulations before that. So assume he says "Either before you entered the room I ran a simulation of this problem as presented to an agent running TDT, or you are such a simulation yourself and I'm going to present this problem to the real you afterwards", or something similar. If he says different things to you and to your simulation instead, then it's not obvious you'll give the same answer.

Are these really "fair" problems? Is there some intelligible sense in which they are not fair, but Newcomb's problem is fair?

Well, a TDT agent has indexical uncertainty about whether or not they're in the simulation, whereas a CDT or EDT agent doesn't. But I haven't thought this through yet, so it might turn out to be irrelevant.

Replies from: private_messaging, drnickbone, DanArmak, MugaSofer
comment by private_messaging · 2012-05-28T21:10:14.274Z · LW(p) · GW(p)

So assume he says "Either before you entered the room I ran a simulation of this problem as presented to an agent running TDT, or you are such a simulation yourself and I'm going to present this problem to the real you afterwards", or something similar.

...

Well, a TDT agent has indexical uncertainty about whether or not they're in the simulation, whereas a CDT or EDT agent doesn't.

Say you have a CDT agent in the world, affecting the world via a set of robotic hands, a robotic voice, and so on. If you wire up two robot bodies to one computer (in parallel, so that all movements are done by both bodies), that is just a somewhat peculiar robotic manipulator. Handling this doesn't require any changes to CDT.

Likewise when you have two robot bodies controlled by an identical mathematical equation: provided that your world model in the CDT utility calculation accounts for all the known manipulators which are controlled by the chosen action, you get the correct result.

Likewise, you can have CDT control a multitude of robots, either from one computer, or from multiple computers that independently determine optimal, identical actions (but where each computer only acts on the robot body assigned to it).

CDT is formally defined using mathematics; the mathematics is already 'timeless', and the fact that the chosen action affects the contents of the boxes is part of the world model, not of the decision theory (and likewise physical time and physical causality are part of the world model, not of the decision theory; even though the decision theory is called causal, that is some other 'causal').

comment by drnickbone · 2012-05-28T18:57:02.458Z · LW(p) · GW(p)

This question of "Does Omega lie to sims?" was already discussed earlier in the thread. There were several possible answers from cousin_it and myself, any of which will do.

comment by DanArmak · 2012-05-28T15:01:53.161Z · LW(p) · GW(p)

He can't have done literally infinitely many simulations. If that were really required, it would offer a way out: we could say the thought experiment stipulates an impossible situation. I haven't yet considered whether the problem can be changed to give the same result and not require infinitely many simulations.

ETA: no wait, that can't be right, because it would apply to the original Newcomb's problem too. So there must be a way to formalize this correctly. I'll have to look it up but don't have the time right now.

Replies from: army1987, MugaSofer
comment by A1987dM (army1987) · 2012-05-28T16:03:14.379Z · LW(p) · GW(p)

In the original Newcomb's problem it's not specified that Omega performs simulations -- for all we know, he might use magic, closed timelike curves, or quantum magic whereby Box A is in a superposition of states entangled with your mind, such that if you open Box B, A ends up being empty, and if you hand B back to Omega, A ends up being full.

Replies from: DanArmak
comment by DanArmak · 2012-05-28T16:26:18.047Z · LW(p) · GW(p)

We should take this seriously: a problem that cannot be instantiated in the physical world should not affect our choice of decision theory.

Before I dig myself in deeper, what does existing wisdom say? What is a practical possible way of implementing Newcomb's problem? For instance, simulation is eminently practical as long as Omega knows enough about the agent being simulated. OTOH, macro quantum entanglement of an arbitrary agent's arbitrary physical instantiation with a box prepared by Omega doesn't sound practical to me, but maybe I'm just swayed by incredulity. What do the experts say? (Including you if you're an expert, obviously.)

Replies from: army1987
comment by A1987dM (army1987) · 2012-05-28T16:37:15.448Z · LW(p) · GW(p)

cannot

0 is not a probability, and even tiny probabilities can give rise to Pascal's mugging.

Unless your utility function is bounded.

Replies from: wedrifid, DanArmak
comment by wedrifid · 2012-05-28T16:58:12.275Z · LW(p) · GW(p)

0 is not a probability, and even tiny probabilities can give rise to Pascal's mugging.

Even? I'd go as far as to say only. Non-tiny probabilities aren't Pascal's muggings. They are just expected utility calculations.

comment by DanArmak · 2012-05-28T17:02:37.755Z · LW(p) · GW(p)

If a problem statement has an internal logical contradiction, there is still a tiny probability that I and everyone else are getting it wrong, due to corrupted hardware, or a common misconception about logic, or pure chance, and the problem can still be instantiated. But it's so small that I shouldn't give it preferential consideration over other things I might be wrong about, like the nonexistence of a punishing god or that the food I'm served at the restaurant today is poisoned.

Either of those, if true, could trump any other (actual) considerations in my actual utility function. The first would make me obey religious strictures to get to heaven. The second threatens death if I eat the food. But I ignore both due to symmetry in the first case (the way to defeat Pascal's wager in general) and to trusting my estimation of the probability of the danger in the second (ordinary expected utility reasoning).

AFAICS both apply to considering an apparently self-contradictory problem statement as really not possible with effective probability zero. I might be misunderstanding things so much that it really is possible, but I might also be misunderstanding things so much that the book I read yesterday about the history of Africa really contained a fascinating new decision theory I must adopt or be doomed by Omega.

All this seems to me to fail due to standard reasoning about Pascal's mugging. What am I missing?

Replies from: army1987
comment by A1987dM (army1987) · 2012-05-28T18:16:50.495Z · LW(p) · GW(p)

If a problem statement has an internal logical contradiction

AFAIK Newcomb's dilemma does not logically contradict itself, it just contradicts the physical law that causality cannot go backwards in time.

Replies from: wedrifid
comment by wedrifid · 2012-05-28T18:23:57.686Z · LW(p) · GW(p)

AFAIK Newcomb's dilemma does not logically contradict itself, it just contradicts the physical law that causality cannot go backwards in time.

It certainly doesn't contradict itself, and I would also assert that it doesn't contradict the physical law that causality cannot go backwards in time. Instead I would say that giving the sane answer to Newcomb's problem requires abandoning the assumption that one's decision must be based only on what it affects through forward-in-time causal, physical influence.

Replies from: private_messaging
comment by private_messaging · 2012-05-28T19:46:14.948Z · LW(p) · GW(p)

Consider making both boxes transparent to illustrate some related issue.

comment by MugaSofer · 2012-12-25T15:57:35.934Z · LW(p) · GW(p)

If that were really required, it would offer a way out: we could say the thought experiment stipulates an impossible situation.

This might be better stated as "incoherent", as opposed to mere impossibility, which can be resolved with magic.

comment by MugaSofer · 2012-12-25T15:54:57.332Z · LW(p) · GW(p)

I assumed the sims weren't conscious - they were abstract implementations of TDT.

Replies from: army1987
comment by A1987dM (army1987) · 2012-12-25T17:59:29.988Z · LW(p) · GW(p)

Well, then there's stuff you know and the sims don't, which you could take into account when deciding and thence decide something different from what they did.

Replies from: MugaSofer
comment by MugaSofer · 2012-12-25T22:26:34.688Z · LW(p) · GW(p)

What stuff? The color of the walls? Memories of your childhood? Unless you have information that alters your decision, or you're not a perfect implementer of TDT -- in which case you get lumped into the category of "CDT, EDT etc."

Replies from: army1987
comment by A1987dM (army1987) · 2012-12-25T23:47:36.162Z · LW(p) · GW(p)

The fact that you're not a sim, and unlike the sims you'll actually be given the money.

Replies from: MugaSofer
comment by MugaSofer · 2012-12-26T01:38:30.604Z · LW(p) · GW(p)

Why the hell would Omega program the sim not to value the simulated reward? It's almost certainly just abstract utility anyway.

comment by cousin_it · 2012-05-23T08:45:05.178Z · LW(p) · GW(p)

Thanks for the post! Your problems look a little similar to Wei's 2TDT-1CDT, but much simpler. Not sure about the other decision theory folks, but I'm quite puzzled by these problems and don't see any good answer yet.

Replies from: drnickbone, drnickbone
comment by drnickbone · 2012-05-23T17:11:22.567Z · LW(p) · GW(p)

I've looked a bit at that thread, and the related follow-ups, and my head is now really spinning. You are correct that my problems were simpler!

My immediate best guess on 2TDT-1CDT is that the human player would do better to submit a simple defect-bot (rather than either CDT or TDT), and this is irrespective of whether the player themselves is running TDT or CDT. If the player has to submit his/her own decision algorithm (source-code) instead of a bot, then we get into a colossal tangle about "who defects first", "whose decision is logically prior to whose" and whether the TDT agents will threaten to defect if they detect that the submitted agent may defect, or has already self-modified into unconditionally defecting, or if the TDT agents will just defect unconditionally anyway to even the score (e.g. through some form of utility trading / long term consequentialism principle that TDT has to beat CDT in the long run, therefore it had better just get on and beat CDT wherever possible...)

In short, I observe I am confused.

With all this logical priority vs temporal priority, and long term consequences feeding into short-term utilities, I'm reminded of the following from HPMOR Chapter 61:

There was a narrowly circulated proverb to the effect that only one Auror in thirty was qualified to investigate cases involving Time-Turners; and that of those few, the half who weren't already insane, soon would be.

comment by drnickbone · 2012-05-23T08:53:45.479Z · LW(p) · GW(p)

Thanks for this, and for the reference. I'll have a look at 2TDT-1CDT to see if there are any insights there which could resolve these problems. I've got a couple of ideas myself, but will check up on the other work.

Replies from: orthonormal
comment by Jack · 2012-05-24T00:17:00.852Z · LW(p) · GW(p)

Can someone answer the following: Say someone implemented an AGI using CDT. What exactly would go wrong that a better decision theory would fix?

Replies from: Manfred, army1987
comment by Manfred · 2012-05-24T19:38:35.920Z · LW(p) · GW(p)

It will defect on all prisoner's dilemmas, even if they're iterated. So, for example, if we'd left it in charge of our nuclear arsenal during the Cold War, it would have launched missiles as fast as possible.

But I think the main motivation was that, when given the option to self-modify, a CDT agent will self-modify as a method of precommitment - CDT isn't "reflectively consistent." And so if you want to predict an AI's behavior and you predict based on CDT with no self-modification, you'll get it wrong, since it doesn't stay CDT. Instead, you should try to find out what the AI wants to self-modify to, and predict based on that.

Replies from: drnickbone, army1987, DanielLC, Jack
comment by drnickbone · 2012-05-25T11:21:12.762Z · LW(p) · GW(p)

A more correct analysis is that CDT defects against itself in iterated Prisoner's Dilemma, provided there is any finite bound to the number of iterations. So two CDTs in charge of nuclear weapons would reason "Hmm, the sun's going to go Red Giant at some point, and even if we escape that, there's still that Heat Death to worry about. Looks like an upper bound to me". And then they'd immediately nuke each other.
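
The backward-induction step can be made concrete with a toy calculation (a sketch of mine, with standard illustrative PD payoffs rather than anything from the thread):

    # Toy backward-induction sketch: in a finitely repeated Prisoner's Dilemma,
    # a CDT-style best responder defects in the last round, hence in the
    # second-to-last, and so on all the way back to round 1.
    PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

    def last_round_best_response(their_move):
        # No future rounds to influence, so maximise the immediate payoff.
        return max("CD", key=lambda my: PAYOFF[(my, their_move)])

    print(last_round_best_response("C"), last_round_best_response("D"))  # D D
    # Since the last round's play is fixed regardless of history, the
    # second-to-last round is effectively the last one, and the argument
    # unwinds all the way to round 1.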

A CDT playing against a "RevengeBot" - if you nuke it, it nukes back with an all out strike - would never fire its weapons. But then the RevengeBot could just take out one city at a time, without fear of retaliation.

Since CDT was the "gold standard" of rationality developed during the time of the Cold War, I am somewhat puzzled why we're still here.

Replies from: Manfred, wedrifid
comment by Manfred · 2012-05-25T11:30:47.946Z · LW(p) · GW(p)

Well, it's good that you're puzzled, because it wasn't - see Schelling's "The Strategy of Conflict."

Replies from: drnickbone
comment by drnickbone · 2012-05-25T11:52:25.865Z · LW(p) · GW(p)

I get the point that a CDT would pre-commit to retaliation if it had time (i.e. self-modify into a RevengeBot).

The more interesting question is why it bothers to do that re-wiring when it is expecting the nukes from the other side any second now...

comment by wedrifid · 2012-05-26T02:31:27.069Z · LW(p) · GW(p)

So two CDTs in charge of nuclear weapons would reason "Hmm, the sun's going to go Red Giant at some point, and even if we escape that, there's still that Heat Death to worry about. Looks like an upper bound to me". And then they'd immediately nuke each other.

This assumes that the mutual possession of nuclear weapons constitutes a prisoner's dilemma. There isn't necessarily a positive payoff to nuking folks. (You know, unless they are really jerks!)

Replies from: drnickbone
comment by drnickbone · 2012-05-26T06:57:12.492Z · LW(p) · GW(p)

Well, nuking the other side eliminates the chance that they'll ever nuke you (or attack with conventional weapons), so there is arguably a slight positive to nuking first as opposed to keeping the peace.

There were some very serious thinkers arguing for a first strike against the Soviet Union immediately after WW2, including (on some readings) Bertrand Russell, who later became a leader of CND. And a pure CDT (with selfish utility) would have done so. I don't see how Schelling theory could have modified that... just push the other guy over the cliff before the ankle-chains get fastened.

Probably the reason it didn't happen was the rather obvious "we don't want to go down in history as even worse than the Nazis" - also there was complacency about how far behind the Soviets actually were. If it had been known that they would explode an A-bomb as little as 4 years after the war, then the calculation would have been different. (Last ditch talks to ban nuclear weapons completely and verifiably - by thorough spying on each other - or bombs away. More likely bombs away I think.)

comment by A1987dM (army1987) · 2012-05-29T09:37:18.853Z · LW(p) · GW(p)

It will defect on all prisoner's dilemmas, even if they're iterated. So, for example, if we'd left it in charge of our nuclear arsenal during the cold war, it would have launched missiles as fast as possible.

I don't think MAD is a prisoner's dilemma: in the Prisoner's Dilemma, if I know you're going to cooperate no matter what, I'm better off defecting, and if I know you're going to defect no matter what, I'm better off defecting. This doesn't seem to be the case here: bombing you doesn't make me better off, all things being equal; it just makes you worse off. If anything, it's a game of Chicken where bombing the opponent corresponds to going straight and not bombing them corresponds to swerving. And CDTists don't always go straight in Chicken, do they?

Replies from: Manfred
comment by Manfred · 2012-05-29T11:19:15.878Z · LW(p) · GW(p)

Hm, I disagree - if nuking the Great Enemy never made you any better off, why was anyone ever afraid of anyone getting nuked in the first place? It might not grow your crops for you or buy you a TV, but gains in security and world power are probably enough incentive to at least make people worry.

Replies from: army1987
comment by A1987dM (army1987) · 2012-05-29T11:24:08.216Z · LW(p) · GW(p)

Still better modelled by Chicken (where the utility of winning is assumed to be much smaller than the negative of the utility of dying, but still non-zero) than by PD.

(edited to add a link)

Replies from: Manfred
comment by Manfred · 2012-05-30T05:00:37.840Z · LW(p) · GW(p)

I don't understand what you mean by "modeled better by chicken" here.

Replies from: Nornagest
comment by Nornagest · 2012-05-30T05:48:16.509Z · LW(p) · GW(p)

I expect army1987's talking about Chicken, the game of machismo in which participants rush headlong at each other in cars or other fast-moving dangerous objects and whoever swerves first loses. The payoff matrix doesn't resemble the Prisoner's Dilemma all that much: there's more than one Nash equilibrium, and by far the worst outcome from either player's perspective occurs when both players play the move analogous to defection (i.e. don't swerve). It's probably most interesting as a vehicle for examining precommitment tactics.

The game-theoretic version of Chicken has often been applied to MAD, as the Wikipedia page mentions.
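For concreteness, here is a small sketch of the payoff-structure contrast (the numbers are made-up illustrative values, with the row player's payoff listed first): the Prisoner's Dilemma has a single pure-strategy equilibrium at mutual defection, whereas Chicken has two, each with exactly one player swerving.

prisoners_dilemma = {        # C = cooperate, D = defect
    ("C", "C"): (3, 3), ("C", "D"): (0, 5),
    ("D", "C"): (5, 0), ("D", "D"): (1, 1),
}
chicken = {                  # S = swerve, G = go straight
    ("S", "S"): (3, 3), ("S", "G"): (2, 5),
    ("G", "S"): (5, 2), ("G", "G"): (0, 0),
}

def pure_nash(game, moves):
    # A move pair is a pure Nash equilibrium if neither player gains by deviating.
    return [(a, b) for a in moves for b in moves
            if all(game[(a, b)][0] >= game[(x, b)][0] for x in moves)
            and all(game[(a, b)][1] >= game[(a, y)][1] for y in moves)]

print(pure_nash(prisoners_dilemma, ["C", "D"]))  # [('D', 'D')]
print(pure_nash(chicken, ["S", "G"]))            # [('S', 'G'), ('G', 'S')]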

Replies from: army1987
comment by A1987dM (army1987) · 2012-05-30T10:22:06.774Z · LW(p) · GW(p)

I was. I should have linked to it, and I have now.

comment by DanielLC · 2012-05-25T07:17:14.038Z · LW(p) · GW(p)

even if they're iterated.

That doesn't seem right. Defecting causes the opponent to defect next time. It's a bad idea with any decision theory.

Instead, you should try to find out what the AI wants to self-modify to, and predict based on that.

It won't self-modify to TDT. It will self-modify to something similar, but using its beliefs at the time of modification as the priors. For example, it will use the doomsday argument immediately to find out how long the world is likely to last, and it will use that information from then on, rather than redoing it as its future self (getting a different answer).

Replies from: Manfred, shokwave
comment by Manfred · 2012-05-25T08:54:54.329Z · LW(p) · GW(p)

That doesn't seem right. Defecting causes the opponent to defect next time. It's a bad idea with any decision theory.

Fair enough. I guess I had some special case stuff in mind - there are certainly ways to get a CDT agent to cooperate on prisoner's dilemma ish problems.

comment by shokwave · 2012-05-25T08:21:03.287Z · LW(p) · GW(p)

That doesn't seem right. Defecting causes the opponent to defect next time. It's a bad idea with any decision theory.

Reason backwards from the inevitable end of the iteration. Defecting makes sense there, so defecting one turn earlier makes sense, so one turn earlier...

Replies from: DanielLC
comment by DanielLC · 2012-05-25T19:17:56.407Z · LW(p) · GW(p)

That depends on whether it's known what the last iteration will be.

Also, I think any deviation from CDT in common knowledge (such as if you're not sure that they're sure that you're sure that they're a perfect CDT) would result in defecting a finite, and small, number of iterations from the end.

comment by Jack · 2012-05-24T19:41:55.203Z · LW(p) · GW(p)

Ah, that second paragraph makes perfect sense. Thanks.

comment by A1987dM (army1987) · 2012-05-28T09:14:09.397Z · LW(p) · GW(p)

I think TDT reduces to CDT if there's no other agent around with intelligence similar to or greater than yours. (You also mustn't have any dynamical inconsistency such as akrasia, otherwise your future and past selves count as ‘other’ as well.) So I don't think it'd make much of a difference for a singleton -- but I'd rather use an RDT just in case.

Replies from: wedrifid
comment by wedrifid · 2012-05-28T14:27:21.139Z · LW(p) · GW(p)

I think TDT reduces to CDT if there's no other agent with similar or greater intelligence than you around.

It isn't the absolute level of intelligence that is required, but rather that the other agent is capable of making a specific kind of reasoning. Even this can be relaxed to things that can only dubiously be said to qualify as being classed "agent". The requirement is that some aspect of the environment has (utility-relevant) behavior that is entangled with the output of the decision to be made in a way that is other than a forward in time causal influence. This almost always implies that some agent is involved but that need not necessarily be the case.

Caveat: Maybe TDT is dumber than I remember and artificially limits itself in a way that is relevant here. I'm more comfortable making assertions about what a correct decision theory would do than about what some specific attempt to specify a decision theory would do.

but I'd rather use an RDT just in case.

You make me happy! RDT!

comment by Paul Crowley (ciphergoth) · 2012-05-23T11:14:04.525Z · LW(p) · GW(p)

There's a different version of these problems for each decision theory, depending on what Omega simulates. For CDT, all agents two-box and all agents get $1000. However, on problem 2, it seems like CDT doesn't have a well-defined decision at all; the effort to work out what Omega's simulator will say won't terminate.

(I'm spamming this post with comments - sorry!)
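A toy illustration of the non-termination worry above (purely illustrative; this naive self-simulating agent is not a formalization of CDT, and the whole point is that it never returns):

def naive_self_simulator():
    # To decide, first work out what Omega's simulation will decide...
    prediction = naive_self_simulator()   # ...which requires making this same decision
    return 2 if prediction == 1 else 1    # (never reached)

try:
    naive_self_simulator()
except RecursionError:
    print("the self-simulation never bottoms out")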

Replies from: drnickbone
comment by drnickbone · 2012-05-23T12:16:59.202Z · LW(p) · GW(p)

You raise an interesting question here - what would CDT do if a CDT agent were in the simulation?

It looks to me like CDT just doesn't have the conceptual machinery to handle this problem properly, so I don't really know. One thing that could happen is that the simulated CDT agent tries to simulate itself and gets stuck in an infinite loop. I didn't specify exactly what would happen in that case, but if Omega can prove that the simulated agent is caught in a loop, then it knows the sim will choose each box with probability zero, and so (since these are all equal) it will fill box 1. But then can't a real-life CDT agent also work this out, and beat the game by selecting box 1? But if so, why won't the sim do that too, and so on? Aargh!!!

Another thought I had is that CDT could try tossing a logical coin, like computing the googolth digit of pi: if it is even, choose box 1; if it is odd, choose box 2. If it runs out of time before finishing the computation (which the real-life agent will), then it just picks box 1 or 2 with equal probability. The simulated CDT agent will however get to the end of the computation (Omega has arbitrary computational resources) and definitely pick 1 or 2, so the money is definitely in one of those two boxes, which looks like it raises the actual agent's probability of winning to 50%. TDT might do the same.

However this looks like cheating to me, for both CDT and TDT.

EDIT: On reflection, it seems clear that CDT would never do anything "creatively sneaky" like tossing a logical coin; but it is the sort of approach that TDT (or some variant thereof) might come up with. Though I still think it's cheating.
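For what it's worth, here is a rough sketch of the "logical coin" strategy, assuming the mpmath library for digits of pi; the digit index and time budget are placeholder stand-ins, since nothing could actually reach the googolth digit. Only the structure matters: use the digit's parity if the computation fits within the budget, otherwise fall back to a fair random choice.

import random
from mpmath import mp, nstr

def nth_pi_digit(k):
    # k-th digit after the decimal point of pi (for k small enough to compute)
    mp.dps = k + 10
    return int(nstr(mp.pi, k + 5)[k + 1])

def choose_box(digit_index, budget=10**6):
    if digit_index <= budget:                      # computation finishes in time
        return 1 if nth_pi_digit(digit_index) % 2 == 0 else 2
    return random.choice([1, 2])                   # out of time: fair coin flip

print(choose_box(100))        # deterministic, set by pi's 100th decimal digit
print(choose_box(10**100))    # a googol digits is out of budget: random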

Replies from: ciphergoth, orthonormal
comment by Paul Crowley (ciphergoth) · 2012-05-23T15:34:14.862Z · LW(p) · GW(p)

I don't think your "detect infinite resources and cheat" strategy is really worth thinking about. Instead of strategies like CDT and TDT whose applicability to limited compute resources is unclear, suppose you have an anytime strategy X, which you can halt at any time and get a decision. Then there's really a family of algorithms X-t, where t is the time you're going to give it to run. In this case, if you are X-t, we can consider the situation where Omega fields X-t against you.

comment by orthonormal · 2012-05-23T15:34:15.340Z · LW(p) · GW(p)

The version of CDT that I described explicitly should arrive at the uniformly random solution. You don't have to be able to simulate a program all the way through, just able to prove things about its output.

EDIT: Wait, this is wrong. It won't be able to consistently derive an answer, because of the way it acts given such an answer, and so it will go with whatever its default Nash equilibrium is.

Replies from: drnickbone
comment by drnickbone · 2012-05-23T15:58:16.213Z · LW(p) · GW(p)

Re: your EDIT. Yes, I've had that sort of reaction a couple of times today!

I'm shifting around between "CDT should pick at random; no, CDT should pick Box 1; no, CDT should use a logical coin; no, CDT should pick its favourite number in the set {1, 2} with probability 1 and hope that the version in the sim has a different favourite number; no, CDT will just go into a loop or collapse in a heap."

I'm also quite clueless how a TDT is supposed to decide if it's told there's a CDT in the sim... This looks like a pretty evil decision problem in its own right.

Replies from: orthonormal
comment by orthonormal · 2012-05-23T18:15:00.048Z · LW(p) · GW(p)

Well, the thing is that CDT doesn't completely specify a decision theory. I'm confident now that the specific version of CDT that I described would fail to deduce anything and go with its default, but it's hard to speak for CDTs in general on such a self-referential problem.

comment by AndyCossyleon · 2012-05-23T20:39:08.007Z · LW(p) · GW(p)

Someone may already have mentioned this, but doesn't the fact that these scenarios include self-referencing components bring Goedel's Incompleteness Theorem into play somehow? I.e. As soon as we let decision theories become self-referencing, it is impossible for a "best" decision theory to exist at all.

Replies from: lackofcheese, shokwave
comment by lackofcheese · 2012-06-21T09:54:39.698Z · LW(p) · GW(p)

There was some discussion of much the same point in this comment thread.

One important thing to consider is that there may be a sensible way to define "best" that is not susceptible to this type of problem. Most notably, there may be a suitable, solvable, and realistic subclass of problems over which to evaluate performance. Also, even if there is no "best", there can still be better and worse.

comment by shokwave · 2012-05-25T08:30:56.814Z · LW(p) · GW(p)

doesn't the fact that these scenarios include self-referencing components bring Goedel's Incompleteness Theorem into play somehow?

Self-reference and the like is necessary for Goedel sentences but not sufficient. It's certainly plausible that this scenario could have a Goedel sentence, but whether the current problem is isomorphic to a Goedel sentence is not obvious, and seems unlikely.

Replies from: AndyCossyleon
comment by AndyCossyleon · 2012-06-20T19:14:01.978Z · LW(p) · GW(p)

Perhaps referring directly to Goedel was not apt. What Goedel showed was that Hilbert's and Russell's efforts were futile. And what Hilbert and Russell were trying to do was create a formal system where actual self-reference was impossible. And the reason they were trying to do that, finally, was that self-reference creates paradoxes which reduce to either incompleteness or inconsistency. And the same is true of these more advanced decision theories. Because they are self-referencing, they create an infinite regress that precludes the existence of a "best" decision theory at all.

So, finding a best decision theory is impossible once self-reference is allowed, because of the nature of self-reference, but not quite because of Goedel's theorems, which are the stronger declaration that any formal system by necessity contains self-referential aspects that make it incomplete or inconsistent.

comment by Stuart_Armstrong · 2012-06-01T11:36:34.155Z · LW(p) · GW(p)

Intuitively this doesn't feel like a 'fair' problem. A UDT agent would ace the TDT formulation and vice versa. Any TDT agent that found a way of distinguishing between 'themselves' and Omega's TDT agent would also ace the problem. It feels like an acausal version of something like:

"I get agents A and B to choose one or two boxes. I then determine the contents of the boxes based on my best guess of A's choice. Surprisingly, B succeeds much better than A at this."

Still an intriguing problem, though.

comment by shokwave · 2012-05-23T07:52:54.894Z · LW(p) · GW(p)

Problems 1 and 2 both look - to me - like fancy versions of the Discrimination problem. edit: I am much less sure of this. That is, Omega changes the world based on whether the agent implements TDT. This bit I am still sure of, but it might be the case that TDT can overcome this anyway.

Discrimination problem: Money Omega puts in room if you're TDT = $1,000. Money Omega puts in room if you're not = $1,001,000.

Problem 1: Money Omega puts in room if you're TDT = $1,000 or $1,001,000. Edit: made a mistake. The error in this problem may be subtler than I first claimed. Money Omega puts in room if you're not = $1,001,000.

Problem 2: $1,000,000 either way. This problem is different but also uninteresting. Due to Omega caring about TDT again, it is just the smallest interesting number paradox for TDT agents only. Other decision theories get a free ride because you're just asking them to reason about an algorithm (easy to show it produces a uniform distribution) and then a maths question (which box has the smallest number on it?).

You claim the rewards are

independent of the method that the agent uses to choose

but they're not. They depend on whether the agent uses TDT to choose or not.

Replies from: aleksiL, drnickbone, cousin_it
comment by aleksiL · 2012-05-23T08:11:35.921Z · LW(p) · GW(p)

Agree. You use process X to determine the setup and agents instantiating X are going to be constrained. Any decision theory would be at a disadvantage when singled out like this.

comment by drnickbone · 2012-05-23T08:51:37.882Z · LW(p) · GW(p)

I've edited the problem statement to clarify Box A slightly. Basically, Omega will put $1001000 in the room ($1000 for box A and $1 million for Box B) regardless of the algorithm run by the actual deciding agent. The contents of the boxes depend only on what the simulated agent decides.

comment by cousin_it · 2012-05-23T08:29:27.978Z · LW(p) · GW(p)

Problem 1: Money Omega puts in room if you're TDT = $1,000 or $1,000,000.

Sorry, shouldn't it be "$1,000 or $1,001,000"?

Replies from: shokwave
comment by shokwave · 2012-05-23T10:06:39.999Z · LW(p) · GW(p)

Right, but $1,001,000 only in the case where you restrict yourself to picking $1,000,000. I oversimplified and it might not actually be accurate.

comment by private_messaging · 2012-05-31T11:28:35.619Z · LW(p) · GW(p)

I think we need a 'non-problematic problems for CDT' thread.

For example, it is not problematic for a CDT-based robot controller to have the control values in action A represent multiple servos in its world model, as if you wired multiple robot arms to one controller in parallel. You may want to do this if you want the robot arms to move in unison and pass along the balls in a real-world imitation of http://blueballmachine2.ytmnd.com/

It is likewise not problematic if you ran out of wire and decided to make the "one controller" be physically two controllers running identical code, or if you ran out of time machines and decided to control yesterday's servo with one controller yesterday and today's servo with the same controller in the same state today. These are simply low-level, irrelevant details.

A mathematical formalization of CDT (such as robot software) will one-box or two-box in Newcomb's problem depending on the world model within which CDT decides. If the world model represents the "prediction" as a second servo controlled by the same variable, then it'll one-box.

Whether philosophical maxims like "act based on the consequences of my actions" one-box or two-box depends in turn solely on philosophical questions like "what is self". E.g. if "self" means the physical meat, then two-box; if "self" means the algorithm (a higher-level concept), then one-box, assuming the thing inside the predictor counts as "self" too.

edit: another thing. Stuff outside the robot's senses is naturally uncertain. Upon hearing the explanation in Newcomb's paradox, one has to update one's estimates of what is outside the senses; it might be that the money is fake, and that there is some external logic and wiring and servos that will put a real million into a box if you choose to 1-box. If the money is to pay for, I dunno, your child's education, clearly one has got to 1-box. I'm pretty sure Causal Deciding General Thud can 1-box just fine, if he needs the money to buy real weapons for a real army, and suspects that outside his senses there may be the predictor spying. General Thud knows that the best option is to 1-box inside the predictor and 2-box outside. The goal is never to two-box outside the predictor.

comment by lackofcheese · 2012-05-24T07:51:17.455Z · LW(p) · GW(p)

Let's say that TDT agents can be divided into two categories, TDT-A and TDT-B, based on a single random bit added to their source code in advance. Then TDT-A can take the strategy of always picking the first box in Problem 2, and TDT-B can always pick the second box.

Now, if you're a TDT agent being offered the problem, then with the aforementioned strategy there's a 50% chance that the simulated agent is different from you, netting you $1 million. This also narrows the advantage of the CDT agent - now they only have a 50% chance of winning the money, which is equal to yours.
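A quick Monte Carlo sketch of the TDT-A/TDT-B idea (the box-filling rule follows the problem statement; the rest is illustrative): the player wins exactly when their random bit differs from the simulated agent's, i.e. about half the time.

import random

def omega_box(sim_type):
    # Omega fills the lowest-numbered box the simulated agent is least likely
    # to take: box 2 if the sim always takes box 1 (TDT-A), box 1 otherwise.
    return 2 if sim_type == "TDT-A" else 1

def chosen_box(my_type):
    return 1 if my_type == "TDT-A" else 2

trials = 100_000
wins = sum(
    chosen_box(random.choice(["TDT-A", "TDT-B"]))       # the agent actually playing
    == omega_box(random.choice(["TDT-A", "TDT-B"]))     # the agent Omega simulated
    for _ in range(trials)
)
print(wins / trials)   # ~0.5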

Replies from: Manfred, drnickbone
comment by Manfred · 2012-05-24T19:31:23.311Z · LW(p) · GW(p)

Actually, the way the problem is specified, Omega puts the money in box 3.

Replies from: drnickbone
comment by drnickbone · 2012-05-24T19:58:11.832Z · LW(p) · GW(p)

The argument is that the simulation is either TDT-A or TDT-B in this case. Either way, the simulated agent will pick a single favourite box (1 or 2) with certainty, so the money is in either Box 2 or Box 1.

Though I can see an interpretation which leads to Box 3. Omega simulates a "new-born" TDT (which is neither -A nor -B) and watches as it differentiates itself to one variant or the other, each with equal probability. So the new-born picks boxes 1 and 2 with equal frequency over multiple simulations, and Box 3 contains the money. Is that what you were thinking?

Replies from: Manfred
comment by Manfred · 2012-05-24T20:00:53.385Z · LW(p) · GW(p)

Is that what you were thinking?

Yes. I was thinking that Omega would have access to the agent's source code, and be running the "play against yourself, if you pick a different number than yourself you win" game. Omega is a jerk :D

Replies from: lackofcheese
comment by lackofcheese · 2012-05-24T20:17:25.020Z · LW(p) · GW(p)

If it's your own exact source being simulated, then it's probably impossible to do better than 10%, and the problem isn't interesting anymore.

comment by drnickbone · 2012-05-24T12:23:59.723Z · LW(p) · GW(p)

That's not too bad, actually. One of my ideas while thrashing about here was that an agent should have a "favourite" number in the set {1, 2} and pick that number with certainty. That way, Omega will definitely put the $1 million in Box 1 or Box 2 and each agent will have 50% chance that their favourite number disagrees with the simulated agent's.

This won't work if Omega describes the source-code of the simulation (or otherwise reveals the simulation's favourite number) - since then any agent with that exact code knows it can't choose deterministically, and its best chance is to pick each box with equal chance, as described in the original analysis.

comment by Polymeron · 2012-06-12T17:23:40.861Z · LW(p) · GW(p)

These questions seem decidedly UNfair to me.

No, they don't depend on the agent's decision-making algorithm; but they do depend on another agent's specific decision-making algorithm, skewing results against any agent with an identical algorithm and letting all others reap the benefits of an otherwise non-advantageous situation.

So, a couple of things:

  1. While I have not mathematically formulated this, I suspect that absolutely any decision theory can have a similar scenario constructed for it, using another agent / simulation with that specific decision theory as the basis for payoff. Go ahead and prove me wrong by supplying one where that's not the case...

  2. It would be far more interesting to see a TDT-defeating question that doesn't have "TDT" (or taboo versions) as part of its phrasing. In general, questions of how a decision theory fares when agents can scan your algorithm and decide to discriminate against that algorithm specifically, are not interesting - because they are losing propositions in any case. When another agent has such profound understanding of how you tick and malice towards that algorithm, you have already lost.

comment by Jonii · 2012-05-23T21:32:40.024Z · LW(p) · GW(p)

The interaction of this simulated TDT and you is so complicated that I don't think many of the commenters here actually did the math to see how they should expect the simulated TDT agent to react in these situations. I know I didn't. I tried, and failed.

Replies from: cousin_it
comment by cousin_it · 2012-05-24T09:35:39.786Z · LW(p) · GW(p)

Maybe I'm missing something, but the formalization looks easy enough to me...

def tdt_utility():
  # Omega fills the boxes according to the simulated TDT agent's decision...
  if tdt(tdt_utility) == 1:
    box1 = 1000
    box2 = 1000000
  else:
    box1 = 1000
    box2 = 0
  # ...and the simulated agent's payout follows from its own (same) decision.
  if tdt(tdt_utility) == 1:
    return box2
  else:
    return box1 + box2

def your_utility():
  # Box contents still depend only on the simulated TDT agent's decision...
  if tdt(tdt_utility) == 1:
    box1 = 1000
    box2 = 1000000
  else:
    box1 = 1000
    box2 = 0
  # ...but your payout depends on your own decision.
  if you(your_utility) == 1:
    return box2
  else:
    return box1 + box2

The functions tdt() and you() accept the source code of a function as an argument, and try to maximize its return value. The implementation of tdt() could be any of our formalizations that enumerate proofs successively, which all return 1 if given the source code to tdt_utility. The implementation of you() could be simply "return 2".
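For what it's worth, a minimal way to run the sketch above is with stubs of the kind cousin_it describes: tdt() replaced by a stub that returns 1 (standing in for a proof-enumerating TDT that one-boxes on tdt_utility, as stated), and you() simply returning 2. These stubs are placeholders, not implementations of either decision theory.

def tdt(utility_source):
    return 1    # placeholder for a proof-searching TDT that one-boxes here

def you(utility_source):
    return 2    # a two-boxing agent, e.g. CDT

print(tdt_utility())    # 1000000: the simulated TDT agent one-boxes
print(your_utility())   # 1001000: the two-boxer collects both boxes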

Replies from: BrandonReinhart
comment by BrandonReinhart · 2012-06-04T02:11:56.505Z · LW(p) · GW(p)

Thanks for this. I hadn't seen someone pseudocode this out before. This helps illustrate that interesting problems lie in the scope above (callers to tdt_utility() etc.) and below (implementation of tdt() etc.).

I wonder if there is a rationality exercise in 'write pseudocode for problem descriptions, explore the callers and implementations'.

comment by shminux · 2012-05-23T20:37:36.077Z · LW(p) · GW(p)

I wonder if there is a mathematician in this forum willing to present the issue in the form of a theorem and a proof for it, in a reasonable mathematical framework. So far all I can see is a bunch of ostensibly plausible informal arguments from different points of view.

Either this problem can be formalized, in which case such a theorem is possible to formulate (whether or not it is possible to prove), or it cannot, in which case it is pointless to argue about it.

Replies from: Vladimir_Nesov, Douglas_Knight
comment by Vladimir_Nesov · 2012-05-23T21:58:45.636Z · LW(p) · GW(p)

Either this problem can be formalized, in which case such a theorem is possible to formulate (whether or not it is possible to prove), or it cannot, in which case it is pointless to argue about it.

Or it's hard to formalize.

Replies from: shminux
comment by shminux · 2012-05-23T22:33:46.472Z · LW(p) · GW(p)

Or it's hard to formalize.

It's pointless to argue about a decision theory problem until it is formalized, since there is no way to check the validity of any argument.

Replies from: TheOtherDave, Vladimir_Nesov
comment by TheOtherDave · 2012-05-23T23:04:28.975Z · LW(p) · GW(p)

So, what ought one do when interested in a problem (decision theory or otherwise) that one does not yet understand well enough to formalize?

I suspect "go do something else until a proper formalization presents itself" is not the best possible answer for all problems, nor is "work silently on formalizing the problem and don't express or defend a position on it until I've succeeded."

Replies from: shminux
comment by shminux · 2012-05-23T23:31:15.536Z · LW(p) · GW(p)

How about "work on formalizing the problem (silently or collaboratively, whatever your style is) and do not defend a position that cannot be successfully defended or refuted"?

Replies from: TheOtherDave
comment by TheOtherDave · 2012-05-24T00:28:02.157Z · LW(p) · GW(p)

Fair enough.
Is there a clear way to distinguish positions worth arguing without formality (e.g., the one you are arguing here) from those that aren't (e.g., the one you are arguing ought not be argued here)?

Replies from: shminux
comment by shminux · 2012-05-24T01:21:51.211Z · LW(p) · GW(p)

It's a good question. There ought to be, but I am not sure where the dividing line is.

comment by Vladimir_Nesov · 2012-05-23T22:40:50.058Z · LW(p) · GW(p)

You check the arguments using mathematical intuition, and you use them to find better definitions. For example, problems involving continuity or real numbers were fruitfully studied for a very long time before rigorous definitions were found.

Replies from: shminux
comment by shminux · 2012-05-23T23:26:16.823Z · LW(p) · GW(p)

You check them using mathematical intuition, and you use them to find better definitions.

Indeed, you use them to find better definitions, which is the first step in formalizing the problem. If you argue whose answer is right before doing so (as opposed, say, to which answer ought to be right once a proper formalization is found), you succumb to lost purposes.

For example, "TDT ought to always make the best decision in a certain class of problems" is a valid purpose, while "TDT fails on a Newcomb's problem with a TDT-aware predictor" is not a well-defined statement until every part of it is formalized.

[EDIT: I'm baffled by the silent downvote of my pleas for formalization.]

Replies from: TheOtherDave
comment by TheOtherDave · 2012-05-24T00:53:32.286Z · LW(p) · GW(p)

[EDIT: I'm baffled by the silent downvote of my pleas for formalization.]

If I had to guess, I'd say that the downvoters interpret those pleas, especially in the context of some of your other comments, as an oblique way of advocating for certain topics of discussion to simply not be mentioned at all.

Admittedly, I interpret them that way myself, so I may just be projecting my beliefs onto others.

Replies from: shminux
comment by shminux · 2012-05-24T01:24:28.170Z · LW(p) · GW(p)

as an oblique way of advocating for certain topics of discussion to simply not be mentioned at all

Wha...? Thank you for letting me know. Though I still have no idea what you might mean, I'd greatly appreciate it if you elaborated on that!

Replies from: TheOtherDave
comment by TheOtherDave · 2012-05-24T04:54:43.243Z · LW(p) · GW(p)

I'm not sure I can add much by elaboration.

My general impression of you(1) is that you consider much of the discussion that takes place here, and much of the thinking of the people who do it, to be kind of a silly waste of time, and that you further see your role here in part as the person who points that fact out to those who for whatever reason have failed to notice it.

Within that context, responding to a comment with a request to formalize it is easy to read as a polite way of expressing "what you just said is uselessly vague. If you are capable of saying something useful, do so, otherwise shut up and leave this subject to the grownups."

And since you aren't consistent about wanting everything to be expressed as a formalism, I assume this is a function of the topic of discussion, because that's the most charitable assumption I can think of.

That said, I reiterate that I have no special knowledge of why you're being downvoted; please don't take me as definitive.

(1) This might be an unfair impression, as I no longer remember what it was that led me to form it.

Replies from: shminux, David_Gerard
comment by shminux · 2012-05-24T14:38:36.211Z · LW(p) · GW(p)

Thank you! I always appreciate candid feedback.

comment by David_Gerard · 2012-05-24T12:27:58.464Z · LW(p) · GW(p)

My general impression of you(1) is that you consider much of the discussion that takes place here, and much of the thinking of the people who do it, to be kind of a silly waste of time, and that you further see your role here in part as the person who points that fact out to those who for whatever reason have failed to notice it.

It's too easy for this to turn into a general counterargument against anything the person says. It may be of benefit to play the ball and not the man.

Replies from: Luke_A_Somers, TheOtherDave
comment by Luke_A_Somers · 2012-05-30T15:49:34.962Z · LW(p) · GW(p)

Anything the person says? In respect to most things it would be a total non-sequitur.

comment by TheOtherDave · 2012-05-24T12:38:19.996Z · LW(p) · GW(p)

Yes, I agree. Perhaps I shouldn't have said anything at all, but, well, he asked.

comment by Douglas_Knight · 2012-05-25T00:52:53.682Z · LW(p) · GW(p)

Which issue/problem? fairness?

Replies from: shminux
comment by shminux · 2012-05-25T17:35:09.946Z · LW(p) · GW(p)

The fairness concept:

the reward is a function of the agent's actual choices in the problem (namely which box or boxes get picked) and independent of the method that the agent uses to choose, or of its choices on any other problems.

should be reasonably easy to formalize, because it does not depend on a full [T]DT algorithm. After that, evaluate the performance of [a]DT on [b]DT-aware Omega Newcomb problems, as described in the OP, where 'a' and 'b' are particular DTs, e.g. a=b=T.
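For concreteness, here is a rough sketch (my own, not shminux's) of the kind of comparison being proposed: fill the boxes by simulating decision theory b, then score decision theory a on the resulting problem. The one-line "theories" are stand-ins only.

def omega_fill(simulated_theory):
    # Box B is filled iff the simulated theory one-boxes (1 = one-box, 2 = two-box)
    return {"A": 1000, "B": 1_000_000 if simulated_theory() == 1 else 0}

def score(actual_theory, simulated_theory):
    boxes = omega_fill(simulated_theory)
    return boxes["B"] if actual_theory() == 1 else boxes["A"] + boxes["B"]

one_boxer = lambda: 1   # stands in for TDT on this problem
two_boxer = lambda: 2   # stands in for CDT on this problem

print(score(one_boxer, one_boxer))   # 1000000: a=T against a TDT-aware Omega
print(score(two_boxer, one_boxer))   # 1001000: a=C against a TDT-aware Omega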

comment by Davidmanheim · 2015-03-22T04:07:09.948Z · LW(p) · GW(p)

Why do you assume agents cannot randomize?

comment by [deleted] · 2012-12-28T20:38:33.989Z · LW(p) · GW(p)

1) Not to my knowledge.

2) No, you reasoned TDT's decisions correctly.

3) A TDT agent would not self-modify to CDT, because if it did, its simulation would also self-modify to CDT and then two-box, yielding only $1000 for the real TDT agent.

4) TDT does seem to be a single algorithm, albeit a recursive one in the presence of other TDT agents or simulations. TDT doesn't have to look into its own code, nor does it change its mind upon seeing it, for it decides as if deciding what the code outputs.

5) This is a bit of a tricky one. You could say it's fair if you judge by whether each agent did the best it could have done, rather than by how much it got, but a CDT agent could say the same when it two-boxes and reasons it would have gotten $0 if it had one-boxed. I guess in a timeless sense, TDT does the best it could have done in these problems, while CDT doesn't do the best it could have done in Newcomb's problem.

6) That's a tough one. If you're asking what Omega's intentions are (or would be in the real world), I have no idea. If you're asking who succeeds at the majority of problems in the problem space of anything Omega can ask, I strongly believe TDT would outperform CDT on it.

comment by MugaSofer · 2012-12-25T15:50:19.295Z · LW(p) · GW(p)

Are these really "fair" problems? Is there some intelligible sense in which they are not fair, but Newcomb's problem is fair? It certainly looks like Omega may be "rewarding irrationality" (i.e. giving greater gains to someone who runs an inferior decision theory), but that's exactly the argument that CDT theorists use about Newcomb.

In Newcomb's Problem, Omega determines ahead of time what decision theory you use. In these problems, it selects an arbitrary decision theory ahead of time. As such, for any agent using this preselected decision theory, these problems are variations of Newcomb's problem. For any agent using a different decision theory, the problem is quite different (and simpler). Thus, whatever agent has had its decision theory preselected can only perform as well as in a standard Newcomb's problem, while a luckier agent may perform better. In other words, there are equivalent problems where Omega bases its decision on the results of a CDT or EDT output, in which those theories actually perform worse than TDT does in these problems.

comment by pnrjulius · 2012-06-09T00:39:32.255Z · LW(p) · GW(p)

Generalization of Newcomb's Problem: Omega predicts your behavior with accuracy p.

This one could actually be experimentally tested, at least for certain values of p; so for instance we could run undergrads (with $10 and $100 instead of $1,000 and $1,000,000; don't bankrupt the university) and use their behavior from the pilot experiment to predict their behavior in later experiments.
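A quick worked example of the accuracy-p generalization, using the usual $1,000 / $1,000,000 payoffs (the numbers and the breakeven point are my own arithmetic, not part of the comment): box B ends up full whenever the prediction matches a one-boxer, or misses a two-boxer.

def ev_one_box(p):
    return p * 1_000_000                 # B is full iff the (one-box) prediction is right

def ev_two_box(p):
    return 1_000 + (1 - p) * 1_000_000   # B is full iff the (two-box) prediction is wrong

for p in (0.5, 0.5005, 0.75, 1.0):
    print(p, ev_one_box(p), ev_two_box(p))
# One-boxing pulls ahead once p > 0.5005.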

comment by syzygy · 2012-06-05T19:08:54.058Z · LW(p) · GW(p)

Why is the discrimination problem "unfair"? It seems like in any situation where decision theories are actually put into practice, that type of reasoning is likely to be popular. In fact I thought the whole point of advanced decision theories was to deal with that sort of self-referencing reasoning. Am I misunderstanding something?

Replies from: DaFranker
comment by DaFranker · 2012-12-28T21:10:04.101Z · LW(p) · GW(p)

If you are a TDT agent, you don't know whether you're the simulation or the "outside decision", since they're effectively the same. Or rather, the simulation will have made the same choice that you will make.

If you're not a TDT agent, you gain more information: You're not a TDT agent, and the problem states TDT was simulated.

So the discrimination problem functionally resolves to:

If you are a TDT agent, have some dirt. End of story.
If you are not a TDT agent, I have done some mumbo-jumbo, and now you can either take one box for $1000 or $1m, or both of them for $1001000. Have fun! (the mumbo-jumbo has nothing to do with you anyway!)

comment by PhilGoetz · 2012-06-04T04:18:07.794Z · LW(p) · GW(p)

Is the trick with problem 1 that what you are really doing, by using a simulation, is having an agent use timeless decision theory in a context where they can't use timeless decision theory? The simulated agent doesn't know about the external agent. Or, you could say, it's impossible for it to be timeless; the directionality of time (simulation first, external agent moves second) is enforced in a way that makes it impossible for the simulated agent to reason across that time barrier. Therefore it's not fair to call what it decides "timeless decision theory".

comment by loup-vaillant · 2012-05-31T08:49:30.533Z · LW(p) · GW(p)

Either problems 1 and 2 are hitting an infinite regress issue, or I don't see why an ordinary TDT agent wouldn't 2-box and choose the first box, respectively. There's a difference between the following problems:

  • I, Omega, predicted that you would do such and such, and acted accordingly.
  • I, Omega, simulated another agent, and acted accordingly.
  • I, Omega, simulated this very problem, only if you don't run TDT that's not the same problem, but I promise it's the same nonetheless, and acted accordingly

Now, in problem 1 and 2, are the simulated problem and the actual problem actually the same? If they are, I see an infinite regress at Omega's side, and therefore not a problem one would ever encounter. If they aren't, then what I actually understand them to be is:

  1. Omega presents the usual two boxes A and B and announces the following. "Before you entered the room, I ran a simulation of Newcomb's problem as presented to an agent running TDT. If the agent two-boxed then I put nothing in Box B, whereas if the agent one-boxed then I put $1 million in Box B. Regardless of how the simulated agent decided, I put $1000 in Box A. Now please choose your box or boxes."

    Really, you don't have to use anything other than TDT to see that the simulated TDT agent one-boxed. Its problem isn't your problem. Your precommitment to your problem doesn't affect your precommitment to its problem. Of course, the simulated TDT agent made the right choice by 1-boxing. But you should 2-box.

  2. Omega now presents ten boxes, numbered from 1 to 10, and announces the following. "I ran multiple simulations of the following problem, presented to a TDT agent: “You must take exactly one box. I determined which box you are least likely to take, and put $1 million in that box. If there is a tie, I put the money in one of them (the one labelled with the lowest number).” I put the money in the box the simulated TDT agent was least likely to choose. If there was a tie, I put the money in one of them (the one labelled with the lowest number). Now choose your box."

    Same here. You know that the TDT agent put equal probability on every box, to maximize its gains. Again, its problem isn't your problem. Your precommitment to your problem doesn't affect your precommitment to its problem. Of course, the simulated TDT agent made the right choice by choosing at random. But you should take box 1.

Replies from: tut
comment by tut · 2012-06-19T13:55:33.912Z · LW(p) · GW(p)

You don't have to use something else than TDT to see that the simulated TDT agent one boxed. Its problem isn't your problem.

This is CDT reasoning, AKA causal reasoning. Or in other words, how do you not use the same reasoning in the original Newcomb problem?

Replies from: loup-vaillant
comment by loup-vaillant · 2012-06-21T22:16:16.355Z · LW(p) · GW(p)

The reasoning is different because the problem is different.

The simulated agent and yourself were not subjected to the same problem. Therefore you can perfectly well precommit to different decisions. TDT does not automatically make the same decisions on problems that merely kinda look the same. They have to actually be the same. There may be specific reasons why TDT would make the same decision, but I doubt it.

Now on to the examples:

### Newcomb's problem

Omega ran a simulation of Newcomb's problem, complete with a TDT agent in it. The simulated TDT agent obviously one-boxed, and got the million. If you run TDT yourself, you also know it. Now, Omega tells you of this simulation, and tells you to choose your boxes. This is not Newcomb's problem. If it was, deciding to 2-box would cause box B to be empty!

CDT would crudely assume that 2-boxing gets it $1000 more than 1-boxing. TDT on the other hand knows the simulated box B (and therefore the real one as well) has the million, regardless of its current decision.

### 10 boxes problem

Again, the simulated problem and the real one aren't the same. If they were, choosing box 1 with probability 1 would cause box 2 to have the million. Because it's not the same problem, even TDT should be allowed to precommit to a different decision. The point of TDT is to foresee the consequences of its precommitments. It will therefore know that its precommitment in the real problem doesn't have any influence on its precommitment (and therefore the outcome) in the simulated one. This lack of influence allows it to fall back on CDT reasoning.

Makes sense?

Replies from: lackofcheese, MugaSofer
comment by lackofcheese · 2012-06-22T00:54:50.546Z · LW(p) · GW(p)

The simulated problem and the actual problem don't have to actually be the same - just indistinguishable from the point of view of the agent.

Omega avoids infinite regress because the actual contents of the boxes are irrelevant for the purposes of the simulation, so no sub-simulation is necessary.

Replies from: loup-vaillant
comment by loup-vaillant · 2012-06-22T09:15:47.031Z · LW(p) · GW(p)

Okay. So, what specific mistake does TDT make that would prevent it from distinguishing the two problems? What leads it to think "If I precommit to X in problem 1, I have to precommit to X in problem 2 as well"?

(If the problems aren't the same, of course Omega can avoid infinite regress. And if there is unbounded regress, we may be able to find a non-infinite solution by looping the regress over itself. But then the problems (simulated and real) are definitely the same.)

Replies from: lackofcheese
comment by lackofcheese · 2012-06-22T11:51:57.817Z · LW(p) · GW(p)

In the simulated problem the simulated agent is presented with the choice but never gets the reward; for all it matters both boxes can be empty. This means that Omega doesn't have to do another simulation to work out what's in the simulated boxes.

The infinite regress is resolvable anyway - since each TDT agent is facing the exact same problem, their decisions must be identical, hence TDT one-boxes and Omega knows this.

Replies from: loup-vaillant
comment by loup-vaillant · 2012-06-22T12:20:19.432Z · LW(p) · GW(p)

The infinite regress is resolvable anyway - since each TDT agent is facing the exact same problem, their decisions must be identical, hence TDT one-boxes and Omega knows this.

Of course.

Now there's still the question of the perceived difference between the simulated problem and the real one (I assume here that you should 1-box in the simulation, and 2-box in the real problem). There is a difference; how come TDT does not see it? A Rational Decision Theory would (we humans do). Or if it can see it, how come it can't act on it? RDT could. Do you concede that TDT does and can, or do you still have doubts?

Replies from: lackofcheese
comment by lackofcheese · 2012-06-23T00:19:36.595Z · LW(p) · GW(p)

Due to how the problem is set up, you can't notice the difference until after you've made your decision. The only reason other decision theories know they're not in the simulation is because the problem explicitly states that a TDT agent is simulated, which means it can't be them.

Replies from: loup-vaillant
comment by loup-vaillant · 2012-06-24T20:04:56.830Z · LW(p) · GW(p)

The only reason other decision theories know they're not in the simulation is because the problem explicitly states that a TDT agent is simulated, which means it can't be them.

That's false. Here is a modified version of the problem:

Omega presents the usual two boxes A and B and announces the following. "Before you entered the room, I ran a simulation of Newcomb's problem as presented to you. If your simulated twin 2-boxed then I put nothing in Box B. If your simulated twin 1-boxed, I put $1 million in Box B. In any case, I put $1000 in Box A. Now please 1-box or 2-box."

Even if you're not running TDT, the simulated agent is running the same decision algorithm as you are. If that were the reason why TDT couldn't tell the difference, well, now no one can. However, you and I can tell the difference. The simulated problem is obviously different:

Omega presents the usual two boxes A and B and announces the following. "I am subjecting you to Newcomb's problem. Now please 1-box or 2-box".

Really, the subjective difference between the two problems should be obvious to any remotely rational agent.

(Please let me know if you agree up until that point. Below, I assume you do.)

I'm pretty sure the correct answers for the two problems (my modified version as well as the original one) are 1-box in the simulation, 2-box in the real problem. (Do you still agree?)

So. We both agree that RDT (Rational Decision Theory) 1-boxes in the simulation, and 2-boxes in the real problem. CDT would 2-box in both, and TDT would 1-box in the simulation while in the real problem it would…

  • 2-box? I think so.
  • 1-box? Supposedly because it can't tell simulation from reality. Or rather, it can't tell the difference between Newcomb's problem and the actual problem. Even though RDT does. (riiight?) So again, I must ask, why not? I need a more specific answer than "due to how the problem is set up". I need you to tell me what specific kind of irrationality TDT is committing here. I need to know its specific blind spot.

Replies from: lackofcheese, APMason
comment by lackofcheese · 2012-06-24T23:04:10.889Z · LW(p) · GW(p)

In your problem, TDT does indeed 2-box, but it's quite a different problem from the original one. Here's the main difference:

I ran a simulation of this problem

vs

I ran a simulation of Newcomb's problem

Replies from: loup-vaillant
comment by loup-vaillant · 2012-06-25T06:41:25.163Z · LW(p) · GW(p)

Oh. So this is indeed a case of "If you're running TDT, I screw you, otherwise you walk free". The way I understand this, TDT one boxes because it should.

If TDT cannot perceive the difference between the original problem and the simulation, this is because it is actually the same problem. For all it knows, it could be in the simulation (the simulation argument would say it is). There is an infinite regress, resolved by the fact that agents at all simulation levels will have made the same decision, because they ran the same decision algorithm. If they all 2-box, they get $1000, while if they all 1-box, they get the million (and no more).

Now, if you replace "an agent running TDT" by "you" (like, a fork of you started from a snapshot taken 3 seconds ago), the correct answer is always to 1-box, because then the problem is equivalent to the actual Newcomb's problem.

comment by APMason · 2012-06-24T20:59:33.848Z · LW(p) · GW(p)

Well, in the problem you present here TDT would 2-box, but you've avoided the hard part of the problem from the OP, in which there is no way to tell whether you're in the simulation or not (or at least there is no way for the simulated you to tell), unless you're running some algorithm other than TDT.

Replies from: loup-vaillant
comment by loup-vaillant · 2012-06-24T21:19:05.734Z · LW(p) · GW(p)

I see no such hard part.

To get back to the exact original problem as stated by the OP, I only need to replace "you" by "an agent running TDT", and "your simulated twin" by "the simulated agent". Do we agree?

Assuming we do agree, are you telling me the hard part is in that change? Are you telling me that TDT would 1-box in the original problem, even though it 2-boxes on my problem?

WHYYYYY?

in which there is no way to tell whether you're in the simulation or not

Wait a minute, what exactly do you mean by "you"? TDT? Or "any agent whatsoever"? If it's TDT alone, why? If I read you correctly, you already agree that it's not because Omega said "running TDT" instead of "running WTF-DT". If it's "any agent whatsoever", then are you really sure the simulated and real problems aren't actually the same? (I'm sure they aren't, but, just checking.)

Replies from: APMason
comment by APMason · 2012-06-24T22:08:30.734Z · LW(p) · GW(p)

Wait a minute, what exactly do you mean by "you"? TDT? Or "any agent whatsoever"? If it's TDT alone, why? If I read you correctly, you already agree that it's not because Omega said "running TDT" instead of "running WTF-DT". If it's "any agent whatsoever", then are you really sure the simulated and real problems aren't actually the same? (I'm sure they aren't, but, just checking.)

Well, no, this would be my disagreement: it's precisely because Omega told you that the simulated agent is running TDT that only TDT could or could not be the simulation; the simulated and real problems are, for all intents and purposes, identical (Omega doesn't actually need to put a reward in the simulated boxes, because he doesn't need to reward the simulated agent, but both problems appear exactly the same to the simulated and real TDT agents).

Replies from: loup-vaillant
comment by loup-vaillant · 2012-06-25T06:51:30.779Z · LW(p) · GW(p)

This comment from lackofcheese finally made it click. Your comment also makes sense.

I now understand that this "problematic" problem just isn't fair. TDT 1-boxes because it's the only way to get the million.

comment by MugaSofer · 2012-12-25T15:53:04.813Z · LW(p) · GW(p)

The simulated agent and yourself were not subjected to the same problem.

Um, yes, they were. That's the whole point.

Replies from: loup-vaillant
comment by loup-vaillant · 2012-12-31T19:40:07.311Z · LW(p) · GW(p)

I'll need to write a full discussion post about that at some point. There is one crucial difference besides "I'm TDT" and "I'm CDT". It's "The simulated agent uses the same decision theory" and "The simulated agent does not use the same decision theory".

That's not exactly the same problem, and I think that is the whole point.

comment by wedrifid · 2012-05-26T02:52:16.750Z · LW(p) · GW(p)

Problem 1: Omega (who experience has shown is always truthful) presents the usual two boxes A and B and announces the following. "Before you entered the room, I ran a simulation of this problem as presented to an agent running TDT. I won't tell you what the agent decided, but I will tell you that if the agent two-boxed then I put nothing in Box B, whereas if the agent one-boxed then I put $1 million in Box B. Regardless of how the simulated agent decided, I put $1000 in Box A. Now please choose your box or boxes."

This is indeed a problem - and one I would describe as belonging to the general class "dealing with other agents who are fucking with you." It is not one that can be solved, and I believe a "correct" decision theory will, in fact, lose (compared to CDT) in this case.

Note that there seems to be some chance that I am confused in a way analogous to the way that people who believe "Two boxing on Newcomb's is rational" are confused. There could be a deep insight I am missing. This seems comparatively unlikely.

comment by [deleted] · 2012-05-25T20:14:16.494Z · LW(p) · GW(p)

For problem 1, in the language of the blackmail posts: because the tactic Omega uses to fill box 2,

TDT-sim.box1,box2=(<F,T> <T,T>) -> Omega.box2=(1M, 0)

depends on TDT-sim's decision, because Omega has already decided, and because Omega didn't make its decision known, a TDT agent presented with this problem is at an epistemic disadvantage relative to Omega: TDT can't react to Omega's actual decision, because it won't know Omega's actual decision until it knows its own actual decision, at which point TDT can't further react. This epistemic disadvantage doesn't need to be enforced temporally; even if TDT knows Omega's source code, if TDT has limited simulation resources, it might not practically be able to compute Omega's actual decision any way but via Omega's dependence on TDT's decision.

any other agent who is not running TDT ... will be able to re-construct the chain of logic and reason that the simulation one-boxed and so box B contains the $1 million

There aren't other ways for an agent to be at an epistemic disadvantage relative to Omega in this problem than by being TDT? Could you construct an agent which was itself disadvantaged relative to TDT?

Replies from: dlthomas
comment by dlthomas · 2012-05-25T20:33:32.692Z · LW(p) · GW(p)

Could you construct an agent which was itself disadvantaged relative to TDT?

"Take only the box with $1000."

Which itself is inferior to "Take no box."

Replies from: None, None, None
comment by [deleted] · 2012-05-25T21:40:47.588Z · LW(p) · GW(p)

Oh, neat. Agents in "lowest terms", whose definitions don't refer to other agents, can't react to any agent's decision, so they're all at an epistemic disadvantage relative to each other, and to themselves, and to every other agent across all games.

comment by [deleted] · 2012-05-25T21:25:27.982Z · LW(p) · GW(p)

How is agent epistemically inferior to agent ? They're both in "lowest terms" in the sense that their definitions don't make reference to other agents / other facts whose values depend on how environments depend on their values, so they're functionally incapable of reacting to other agents' decisions, and are on equivalent footing.

comment by [deleted] · 2012-05-25T21:16:18.250Z · LW(p) · GW(p)

How is agent epistemically inferior to agent ? They're both constant decisions across all games, both functionally incapable of reacting to any other agent's actual decisions. Even if we broaden the definition of "react" so that constant programs are reacting to other constant programs, your two agents still have equivalent conditional power / epistemic vantage / reactive footing.

comment by Bill_McGrath · 2012-05-24T11:05:06.061Z · LW(p) · GW(p)

Any agent who is themselves running TDT will reason as in the standard Newcomb problem.

Will they? Surely it's clear that it's now possible to take $1,001,000, because the circumstances are slightly different.

In the standard Newcomb problem, where Omega predicts your behaviour, it's not possible to trick it or act other than its expectation. Here, it is.

Is there some basic part of decision theory I'm not accounting for here?

Replies from: falenas108
comment by falenas108 · 2012-05-24T12:45:28.875Z · LW(p) · GW(p)

Yes. If the TDT agent picked the $1,001,000 here, then the simulated agent would have two-boxed as well, meaning only box A would be filled.

Remember, the simulated agent was presented with the same problem, so the decision TDT makes here is the same one the simulated agent makes.

Replies from: Bill_McGrath
comment by Bill_McGrath · 2012-05-24T13:08:23.949Z · LW(p) · GW(p)

Right, I understand what you mean. I was thinking of it in the context of a person being presented with this situation, not an idealized agent running a specific decision theory.

And Omega's simulated agent would presumably hold all the same information as a person would, and be capable of responding the same way.

Cheers for clarifying that for me.

comment by dvasya · 2012-05-23T17:51:57.955Z · LW(p) · GW(p)

In both your problems, the seeming paradox comes from failure to recognize that the two agents (one that Omega has simulated and one making the decision) are facing entirely different prior information. Then, nothing requires them to make identical decisions. The second agent can simulate itself having prior information I1 (that the simulated agent has been facing), then infer Omega's actions, and arrive at the new prior information I2 that is relevant for the decision. And I2 now is independent of which decision the agent would make given I2.

Replies from: drnickbone
comment by drnickbone · 2012-05-23T19:06:52.774Z · LW(p) · GW(p)

Are you sure that they are facing different prior information? If the sim is a good one, then the TDT agent won't be able to tell whether it is the sim or not. However, you are right that one solution could be that there are multiple TDT variants who have different information and so can logically separate their decisions.

I mentioned the problems with that in another response here. The biggest problem is that it seriously undermines the attraction and effectiveness of TDT as a decision theory if different instances of TDT are going to find excuses to separate from each other.

comment by Davorak · 2012-05-23T17:42:51.755Z · LW(p) · GW(p)

Omega (who experience has shown is always truthful) presents the usual two boxes A and B and announces the following. "Before you entered the room, I ran a simulation of this problem as presented to an agent running TDT.

There seems to be a contradiction here. If Omega said this to me, I would have to believe that Omega had just presented evidence of being untruthful some of the time.

If Omega simulated the problem at hand, then in that simulation Omega must also have said: "Before you entered the room, I ran a simulation of this problem as presented to an agent running TDT." In the first (innermost) simulation, that statement is a lie.

Problem 2 has a similar problem.

It is not obvious that the problem can be reformulated to keep Omega consistently truthful and still have CDT or EDT come out ahead of TDT.

Replies from: drnickbone, cousin_it
comment by drnickbone · 2012-05-23T18:57:44.646Z · LW(p) · GW(p)

Your difficulty seems to be with the parenthesis "(who experience has shown is always truthful)". The relevant experience here is going to be derived from real-world subjects who have been in Omega problems, exactly as is assumed for the standard Newcomb problem. It's not obvious that Omega always tells the truth to its simulations; no-one in the outside world has experience of that.

However you can construe the problem so that Omega doesn't have to lie, even to sims. Omega could always prefix its description of the problem with a little disclaimer "You may be one of my simulations. But if not, then...".

Or Omega could simulate a TDT agent making decisions as if it had just been given the problem description verbally by Omega, without Omega actually doing so. (Whether that's possible or not depends a bit on the simulation).

comment by cousin_it · 2012-05-23T17:57:50.479Z · LW(p) · GW(p)

Omega could truthfully say "the contents of the boxes are exactly as if I'd presented this problem to an agent running TDT".

Replies from: Davorak
comment by Davorak · 2012-05-23T18:41:41.352Z · LW(p) · GW(p)

I do not know if Omega can say that truthfully, because I do not know whether the self-referential equation representing the problem has a solution.

The problems set out by the OP assume there is a solution and a particular answer, but without writing out the equation and plugging in the solution to show that it actually works.

Replies from: cousin_it
comment by cousin_it · 2012-05-23T19:05:49.976Z · LW(p) · GW(p)

There is a solution because Omega can get an answer by simulating TDT, or am I missing something?

Replies from: magfrump
comment by magfrump · 2012-05-24T08:12:07.223Z · LW(p) · GW(p)

It may or may not be proven that TDT settles on answers to questions involving TDT. If TDT doesn't get an answer, then Omega can't get an answer.

Presumably it is true that TDT settles on an answer, but if it isn't proven, it may not be true; or it could be that the proof (i.e. a formalization of TDT) will provide insight that is currently lacking (such as cutting off after a certain level of resource use: can Omega emulate how many resources the current TDT agent will use? Can the TDT agent commit to using a random amount of resources? Do true random-number generators exist? These problems might all be inextricable. Or they might not. I, for one, don't know.)
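A toy sketch of the halting worry, using a hypothetical stand-in agent that decides purely by simulating "the agent Omega simulated" (the actual formalizations discussed below rely on proof search rather than direct self-simulation, so this is only an illustration of the regress): the naive self-simulator never bottoms out, while a crude resource-bounded variant trivially halts.

    import sys

    def naive_agent():
        # "What did the simulated agent do?" -- answered by simulating itself.
        simulated_choice = naive_agent()          # infinite regress, never halts
        return "one-box" if simulated_choice == "one-box" else "two-box"

    def bounded_agent(depth=10):
        # Cut off the regress after a fixed resource budget.
        if depth == 0:
            return "one-box"                      # arbitrary base-case assumption
        simulated_choice = bounded_agent(depth - 1)
        return "one-box" if simulated_choice == "one-box" else "two-box"

    print(bounded_agent())                        # halts and prints "one-box"
    try:
        sys.setrecursionlimit(100)                # fail fast instead of waiting
        print(naive_agent())
    except RecursionError:
        print("naive_agent never settles on an answer")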

Replies from: cousin_it
comment by cousin_it · 2012-05-24T09:17:30.553Z · LW(p) · GW(p)

It may or may not be proven that TDT settles on answers to questions involving TDT.

We have several formalizations of UDT that would solve this problem correctly.

Replies from: magfrump
comment by magfrump · 2012-05-24T17:57:48.332Z · LW(p) · GW(p)

Having several formalizations is 90% of a proof, not 100% of a proof. Turn the formalization into a computer program AND either prove that it halts or run this simulation on it in finite time.

I believe that it's true that TDT will get an answer and hence Omega will get an answer, but WHY this is true relies on facts about TDT that I don't know (specifically facts about its implementation; maybe facts about differential topology that game-theoretic equilibrium results rely on.)

Replies from: cousin_it
comment by cousin_it · 2012-05-24T18:27:22.388Z · LW(p) · GW(p)

The linked posts have proofs that the programs halt and return the correct answer. Do you understand the proofs, or could you point out the areas that need more work? Many commenters seemed to understand them...

Replies from: magfrump
comment by magfrump · 2012-05-25T04:14:27.901Z · LW(p) · GW(p)

I do not understand the proofs, primarily because I have not put time in to trying to understand them.

I may have become somewhat defensive in these posts (or withdrawn I guess?) but looking back my original point was really to point out that, naively, asking whether the problem is well-defined is a reasonable question.

The questions in the OP set off alarm bells for me of "this type of question might be a badly-defined type of question" so asking whether these decisions are in the "halting domain" (is there an actual term for that?) of TDT seems like a reasonable question to ask before putting too much thought into other issues.

I believe the answer to be that yes these questions are in the "halting domain" of TDT, but I also believe that understanding what that is and why these questions are legitimate and the proofs that TDT halts will be central to any resolution of these problems.

What I'm really trying to say here is that it makes sense to ask these questions, but I don't understand why, so I think Davorak's question was reasonable, and your answer didn't feel complete to me. Looking back, I don't think I've contributed much to this conversation. Sorry!

comment by nekomata · 2012-05-23T08:19:53.006Z · LW(p) · GW(p)

I don't understand the special role of box 1 in Problem 2. It seems to me that if Omega just makes different choices for the box in which to put the money, all decision theories will say "pick one at random" and will be equal.

In fact, the only reason I can see why Omega picks box 1 seems to be that the "pick at random" process of your TDT is exactly "pick the first one". Just replace it with something dependent on its internal clock (or any parameter not known at the time when Omega asks its question) and the problem disappears.

Replies from: drnickbone
comment by drnickbone · 2012-05-23T09:00:11.548Z · LW(p) · GW(p)

Omega's choice of box depends on its assessment of the simulated agent's choosing probabilities. The tie-breaking rule (if there are several boxes with equal lowest choosing probability, then select the one with the lowest label) is to an extent arbitrary, but it is important that there is some deterministic tie-breaking rule.

I also agree this is entirely a maths problem for Omega or for anyone whose decisions aren't entangled with the problem (with a proof that Box 1 will contain the $1 million). The difficulty is that a TDT agent can't treat it as a straight maths problem which is unlinked to its own decisions.
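A minimal sketch of Omega's selection rule as described here, where simulate_tdt_choice is a hypothetical placeholder for Omega's simulation of the TDT agent:

    import random

    NUM_BOXES = 10

    def simulate_tdt_choice():
        # Placeholder: whatever distribution the simulated TDT agent's choice
        # actually follows; a uniform pick is used here only to run the sketch.
        return random.randint(1, NUM_BOXES)

    def omega_selects_box(num_runs=100_000):
        counts = {box: 0 for box in range(1, NUM_BOXES + 1)}
        for _ in range(num_runs):
            counts[simulate_tdt_choice()] += 1
        lowest = min(counts.values())
        # Deterministic tie-break: smallest-numbered box among the least-chosen.
        return min(box for box, count in counts.items() if count == lowest)

    print("Omega puts the $1 million in box", omega_selects_box())

The final min over the least-chosen boxes is the tie-break: given the simulation results, Omega's box choice is fully determined.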

Replies from: nekomata
comment by nekomata · 2012-05-24T15:19:31.570Z · LW(p) · GW(p)

Why is it important that there is a deterministic tie-breaking rule? When you would like random numbers, isn't it always better to have a distribution as close to random as possible, even if it is pseudo-random?

That question is perhaps stupid; I have the impression that I am missing something important...

Replies from: drnickbone
comment by drnickbone · 2012-05-25T11:31:36.791Z · LW(p) · GW(p)

Remember it is Omega implementing the tie-breaker rule, since it defines the problem.

The consequence of the tie-breaker is that the choosing agent knows that Omega's box-choice was a simple deterministic function of a mathematical calculation (or a proof). So the agent's uncertainty about which box contains the money is pure logical uncertainty.

Replies from: nekomata
comment by nekomata · 2012-05-25T12:03:49.057Z · LW(p) · GW(p)

Whoops... I can't believe I missed that. You are obviously right.

comment by private_messaging · 2012-05-25T22:04:12.544Z · LW(p) · GW(p)

There was this Rocko thing a while back (which is not supposed to be discussed), where, if I understood that nonsense correctly, the idea was that the decision theories here would do the equivalent of one-boxing on Newcomb with transparent boxes where you could see there is no million, when there's no million (and where the boxes were made and sealed before you were born). It's not easy to one-box rationally.

Also, in practice, being simulated correctly is usually awesome for getting scammed (agents tend to face adversaries rather than crazed beneficiaries).