Comments

Comment by dankane on A taxonomy of Oracle AIs · 2017-09-30T07:44:41.708Z · LW · GW

I feel like your discussion of predictors makes a few not-necessarily-warranted assumptions about how the predictor deals with self-reference. Then again, I guess any predictor that doesn't deal with self-reference fails in a wide range of useful cases: it predicts that a massive fire will kill 100 people, and naturally that prediction is used to prevent the fire, invalidating the original prediction.

But there is a simple-ish fix. What if you simply ask it to make predictions about what would happen if it (and say all similar predictors) suddenly stopped functioning immediately before this prediction was returned?

Comment by dankane on Markets are Anti-Inductive · 2017-09-14T16:49:03.713Z · LW · GW

Unless you can explain to me how prediction markets are going to break the pattern that two different shares of the same stock have correlated prices.

I'm actually not sure how prediction markets are supposed to have an effect on this issue. My issue is not that people have too much difficulty recognizing patterns. My issue is that some patterns once recognized do not provide incentives to make that pattern disappear. Unless you can tell me how prediction markets might fix this problem, your response seems like a bit of a non-sequitur.

Comment by dankane on Markets are Anti-Inductive · 2017-09-06T01:16:06.988Z · LW · GW

This seems like too general a principle. I agree that in many circumstances, public knowledge of a pattern in pricing will lead to effects causing that pattern to disappear. However, it is not clear to me that this is always the case, or that the size of the effect will be sufficient to completely cancel out the original observation.

For example, I observe that two different units of Google stock have prices that are highly correlated with each other. I doubt that this observation will cause separate markets to spring up giving wildly divergent prices to different shares of the same stock. I also note that stock prices are always non-negative. I also doubt that this will cease to be the case any time soon.

Although these are somewhat tautological, one can imagine non-tautological observations that will not disappear. If stocks A and B are known to be highly correlated, this may well lead to a somewhat larger gap, as hedge funds that predict a small difference in expected returns will buy one and short the other. However, if the stocks are correlated for structural reasons, part of the story may be that it is hard to find effects that would cause their prices to diverge significantly, so merely observing the correlation will likely not be enough to remove all of it.

One can also imagine general observations about the market itself, like the approximate frequency of crashes, or the approximate log-normality of price changes, that might not disappear simply because they are known. In order for a pattern to disappear there needs to be a way to make a profit off of it.

Comment by dankane on The Strangest Thing An AI Could Tell You · 2016-05-23T07:49:32.268Z · LW · GW

We probably couldn't even talk ourselves out of this box.

I don't know... That sounds a lot like what an AI trying to talk itself out of a box would say.

Comment by dankane on The Mystery of the Haunted Rationalist · 2015-11-19T08:59:24.814Z · LW · GW

Hmm... I would probably explain the threshold for staying in the house not as an implicit expected-probability computation, but as an evaluation of the price of the discomfort associated with staying in a location that you find spooky. At least for me, I think that the part of my mind that knows that ghosts do not exist would have no trouble controlling whether or not I remain in the house. However, it might well decide that it is not worth the $10 that I would receive to spend the entire night in a place where some other piece of my mind is constantly yelling at me to run away screaming.

Comment by dankane on An overall schema for the friendly AI problems: self-referential convergence criteria · 2015-07-14T16:55:42.108Z · LW · GW

It's just that such self-referential criteria as reflective equilibrium are a necessary condition

Why? The only examples of adequately friendly intelligent systems that we have (i.e. us) don't meet this condition. Why should reflective equilibrium be a necessary condition for FAI?

Comment by dankane on Taking Effective Altruism Seriously · 2015-06-08T06:20:21.849Z · LW · GW

That may be true (at least to the degree to which it is sensible to assign a specific cause to a given util). However, it is not very good evidence that investment in first world economies is the most effective way to generate utils in Africa.

Comment by dankane on Taking Effective Altruism Seriously · 2015-06-05T23:24:53.677Z · LW · GW

OK. So suppose that I grant your claim that donations to sub-Saharan Africa will not substantially affect the size of the future economic pie, but that other investments will. I claim that there may still be reason to donate there.

I grant that such a donation will produce fewer dollars of value than investing in capital infrastructure. On the other hand, dollars are not the objective; utils are. We can reasonably assume that the marginal utility of an extra dollar for a given person decreases as that person's wealth increases. We can reasonably expect that world GDP per capita will be much higher in 100 years, and we know that GDP per capita is much higher in the US than in sub-Saharan Africa. Thus, even if an investment in first-world infrastructure produces more total dollars of value, those dollars are going to much wealthier people than dollars donated to people in sub-Saharan Africa today, and thus might well produce fewer total utils.
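To make the diminishing-marginal-utility point concrete, here is a toy calculation in Python. The logarithmic utility function and all of the income and gift figures are illustrative assumptions, not numbers from the thread; the only point is that a smaller transfer to a much poorer recipient can produce more utils than a larger return accruing to a wealthier one.

```python
import math

def utils_from_gift(base_income, gift):
    """Change in log utility from adding `gift` dollars to `base_income`."""
    return math.log(base_income + gift) - math.log(base_income)

# Hypothetical numbers purely for illustration.
poor_today = utils_from_gift(base_income=1_000, gift=100)    # poorer recipient today
rich_future = utils_from_gift(base_income=50_000, gift=300)  # wealthier future recipient of a larger return

print(f"utils to poor recipient today:  {poor_today:.4f}")   # ~0.0953
print(f"utils to rich future recipient: {rich_future:.4f}")  # ~0.0060
```

Under this (assumed) utility function, the smaller gift to the poorer recipient produces more than ten times the utils, even though it is fewer dollars.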

Comment by dankane on The Truly Iterated Prisoner's Dilemma · 2015-04-04T06:40:53.835Z · LW · GW

[I realize that I missed the train and probably very few people will read this, but here goes]

So in the non-iterated prisoner's dilemma, defect is a dominant strategy: no matter what the opponent is doing, defecting will always give you the best possible outcome. In the iterated prisoner's dilemma, there is no longer a dominant strategy. If my opponent is playing Tit-for-Tat, I get the best outcome by cooperating in all rounds but the last. If my opponent ignores what I do, I get the best outcome by always defecting. It is true that always-defect is the unique Nash equilibrium strategy, but that is a much weaker reason for playing it, especially given that the evidence shows that, when playing among people who are trying to win, Tit-for-Tat tends to achieve much better outcomes.
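As a rough illustration of why always-defect is not dominant in the iterated game, here is a small Python simulation. The payoff matrix (T=5, R=3, P=1, S=0) and the 100-round length are assumptions chosen for concreteness, not values taken from the post.

```python
# Standard PD payoffs for (row, column) moves: first entry is the row player's score.
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(my_hist, opp_hist):
    return 'C' if not opp_hist else opp_hist[-1]

def all_defect(my_hist, opp_hist):
    return 'D'

def cooperate_until_last(rounds):
    def strat(my_hist, opp_hist):
        return 'D' if len(my_hist) == rounds - 1 else 'C'
    return strat

def play(strat_a, strat_b, rounds=100):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strat_a(hist_a, hist_b), strat_b(hist_b, hist_a)
        pa, pb = PAYOFF[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(a); hist_b.append(b)
    return score_a, score_b

print(play(all_defect, tit_for_tat))                 # (104, 99): defect forever against Tit-for-Tat
print(play(cooperate_until_last(100), tit_for_tat))  # (302, 297): cooperate until the last round
```

Against Tit-for-Tat, the cooperate-until-the-last-round strategy earns nearly three times what always-defect does, which is exactly why there is no dominant strategy in the iterated game.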

There seems to be a lot of discussion in the comments about this or that being the rational thing to do, and I think that this is a big problem that gets in the way of clear thinking about the issue. The problem is that people are using the word "rational" here without having a clear idea as to what exactly that means. Sure, it's the thing that wins, but wins when? Provably, there is no single strategy that achieves the best possible outcome against all possible implementations of Clippy. So what do you mean? Are you trying to optimize your expected utility under a Kolmogorov prior? If so, how come nobody seems to be trying to compute the posterior distribution? Or discussing exactly what side data we know about the issue that might inform this probability computation? Or even wondering which universal Turing machine we are using to define our prior? Unless you want to give a more concrete definition of what you mean by "rational" in this context, perhaps you should stop arguing for a moment about what the rational thing to do is.

Comment by dankane on An Introduction to Löb's Theorem in MIRI Research · 2015-03-26T18:46:28.272Z · LW · GW

I think that the way that humans predict other humans is the wrong way to look at this; instead, consider how humans would reason about the behavior of an AI that they build. I'm not proposing simply "don't use formal systems", or even "don't limit yourself exclusively to a single formal system". I am actually alluding to a far more specific procedure:

  • Come up with a small set of basic assumptions (axioms)
  • Convince yourself that these assumptions accurately describe the system at hand
  • Try to prove that the axioms would imply the desired behavior
  • If you cannot do this, return to the first step and see if additional assumptions are necessary

Now it turns out that for almost any mathematical problem that we are actually interested in, ZFC is going to be a sufficient set of assumptions, so the first few steps here are somewhat invisible, but they are still there. Somebody needed to come up with these axioms for the first time, and each individual who wants to use them should convince themselves that they are reasonable before relying on them.

A good AI should already do this to some degree. It needs to come up with models of any system that it is interacting with before determining its course of action. It is obvious that it might need to update the assumptions it uses to model physical laws; why shouldn't it just do the same thing for logical ones?

Comment by dankane on An Introduction to Löb's Theorem in MIRI Research · 2015-03-26T05:16:49.931Z · LW · GW

Yes, obviously. We solve the Löbstacle by not ourselves running on formal systems and by sometimes accepting axioms that we were not born with (things like PA). Allowing the AI to do only those things that it can prove (within a specific formal system) will have good consequences would make it dumber than us.

Comment by dankane on An Introduction to Löb's Theorem in MIRI Research · 2015-03-26T04:46:41.640Z · LW · GW

Actually, why is it that when the Löbian obstacle is discussed, it always seems to be in reference to an AI trying to determine whether a successor AI is safe, and not an AI trying to determine whether it, itself, is safe?

Comment by dankane on An Introduction to Löb's Theorem in MIRI Research · 2015-03-26T04:11:59.527Z · LW · GW

Question: If we do manage to build a strong AI, why not just let it figure this problem out on its own when trying to construct a successor? Almost definitionally, it will do a better job of it than we will.

Comment by dankane on Newcomblike problems are the norm · 2014-09-25T17:59:48.705Z · LW · GW

Relatedly, with your interview example, I think that perhaps a better model is that whether a person is confident or shy does not depend on whether they believe that they will be bold, but on the degree to which they care about being laughed at. If you are confident, you don't care about being laughed at and might as well be bold. If you are afraid of being laughed at, you already know that you are shy and thus do not gain anything by being bold.

Comment by dankane on Newcomblike problems are the norm · 2014-09-25T17:44:12.784Z · LW · GW

I think my bigger point is that you don't seem to make any real argument as to which case we are in. For example, consider the following model of how people's perception of my trustworthiness might be correlated with my actual trustworthiness. There are two causal chains:

My values -> Things I say -> Peoples' perceptions

My values -> My actions

So if I value trustworthiness, I will not, for example, talk much about wanting to avoid being a sucker (in contexts where that would refer to doing trustworthy things). This will influence peoples' perceptions of whether or not I am trustworthy. Furthermore, if I do value trustworthiness, I will want to be trustworthy.

This setup makes things look very much like the smoking lesion problem. A CDT agent that values trustworthiness will be trustworthy because they place intrinsic value in it. A CDT agent that does not value trustworthiness will be perceived as being untrustworthy. Simply changing their actions will not alter this perception, and therefore they will fail to be trustworthy in situations where it benefits them, and this is the correct decision.

Now you might try to break the causal link

My values -> Things that I say

and doing so is certainly possible (I mean, you can have spies that successfully pretend to be loyal for extended periods without giving themselves away). On the other hand, it might not happen often, for several possible reasons:

A) Maintaining a facade at all times is exhausting (and thus imposes high costs).

B) Lying consistently is hard (as in too computationally expensive).

C) The right way to lie consistently is to simulate the altered value set, but this may actually lead to changing your values (standard advice for becoming more confident is pretending to be confident, right?).

So yes, in this model a non-trust-valuing and self-modifying CDT agent will self-modify, but it will need to self-modify its values rather than its decision theory. Using a decision theory that is trustworthy despite not intrinsically valuing trustworthiness doesn't help.

Comment by dankane on Newcomblike problems are the norm · 2014-09-25T16:20:34.976Z · LW · GW

Newcomblike problems occur whenever knowledge about what decision you will make leaks into the environment. The knowledge doesn't have to be 100% accurate, it just has to be correlated with your eventual actual action.

This is far too general. The way in which information is leaking into the environment is what separates Newcomb's problem from the smoking lesion problem. For your argument to work you need to argue that whatever signals are being picked up on would change if the subject changed their disposition, not merely that these signals are correlated with the disposition.

Comment by dankane on Causal decision theory is unsatisfactory · 2014-09-19T01:37:26.589Z · LW · GW

Sorry. I'm not quite sure what you're saying here. Though, I did ask for a specific example, which I am pretty sure is not contained here.

Though to clarify, by "reading your mind" I refer to any situation in which the scenario you face (including the given description of that scenario) depends directly on which program you are running and not merely upon what that program outputs.

Comment by dankane on Causal decision theory is unsatisfactory · 2014-09-19T00:59:57.332Z · LW · GW

Well, yes. Then again, the game was specified as PD against BOT^CDT, not as PD against BOT^{you}. It seems pretty clear that for X not equal to CDT, it is not the case that X could achieve the result CC in this game. Are you saying that it is reasonable to say that CDT could achieve a result that no other strategy could, just because its code happens to appear in the opponent's program?

I think that there is perhaps a distinction to be made between things that happen to be simulating your code and things that are causally simulating your code.

Comment by dankane on Causal decision theory is unsatisfactory · 2014-09-19T00:42:57.088Z · LW · GW

OK. Fine. Point taken. There is a simple fix though.

MBOT^X(Y) = X'(MBOT^X) where X' is X but with randomized irrelevant experiences.

In order to produce this properly, MBOT only needs to have your prior (or a sufficiently similar probability distribution) over irrelevant experiences hardcoded. And while your actual experiences might be complicated and hard to predict, your priors are not.

Comment by dankane on Causal decision theory is unsatisfactory · 2014-09-18T14:50:13.908Z · LW · GW

No. BOT^CDT = DefectBot. It defects against any opponent. CDT could not cause it to cooperate by changing what it does.

If it cooperated, it would get CC instead of DD.

Actually if CDT cooperated against BOT^CDT it would get $3^^^3. You can prove all sorts of wonderful things once you assume a statement that is false.

Depending on the exact setup, "irrelevant details in memory" are actually vital information that allow you to distinguish whether you are "actually playing" or are being simulated in BOT's mind.

OK... So UDT^Red and UDT^Blue are two instantiations of UDT that differ only in irrelevant details. In fact, the scenario is a mirror matchup, except that after instantiation one of the copies was painted red and the other was painted blue. According to what you seem to be saying, UDT^Red will reason:

Well, I can map different epistemic states to different outputs, so I can implement the strategy "cooperate if you are painted blue and defect if you are painted red."

Of course UDT^Blue will reason the same way and they will fail to cooperate with each other.

Comment by dankane on Causal decision theory is unsatisfactory · 2014-09-18T07:54:58.327Z · LW · GW

It's hard to see how this doesn't count as "reading your mind".

So... UDT's source code is some mathematical constant, say 1893463. It turns out that UDT does worse against BOT^1893463. Note that it does worse against BOT^1893463, not BOT^{you}. The universe does not depend on the source code of the person playing the game (as it does in mirror PD). Furthermore, UDT does not control the output of its environment. BOT^1893463 always cooperates. It cooperates against UDT. It cooperates against CDT. It cooperates against everything.

But this isn't due to any intrinsic advantage of CDT's algorithm. It's just because they happen to be numerically inequivalent.

No. CDT does at least as well as UDT against BOT^CDT. UDT does worse when there is this numerical equivalence, but CDT does not suffer from this issue. CDT does at least as well as UDT against BOT^X for all X, and sometimes does better. In fact, if you only construct counterfactuals this way, CDT does at least as well as anything else.

An instance of UDT with literally any other epistemic state than the one contained in BOT would do just as well as CDT here.

This is silly. A UDT that believes that it is in a mirror matchup also loses. A UDT that believes it is facing Newcomb's problem does something incoherent. If you are claiming that you want a UDT that differs from the encoding in BOT because of some irrelevant details in its memory... well, then it might depend upon implementation, but I think that most attempted implementations of UDT would conclude that these irrelevant details are irrelevant and cooperate anyway. If you don't believe this, then you should also think that UDT will defect in a mirror matchup if it and its clone are painted different colors.

Comment by dankane on Causal decision theory is unsatisfactory · 2014-09-18T06:58:37.294Z · LW · GW

Actually, I think that you are misunderstanding me. UDT's current epistemic state (at the start of the game) is encoded into BOT^UDT. No mind reading involved. Just a coincidence. [Really, your current epistemic state is part of your program]

Your argument is like saying that UDT usually gets $1001000 in Newcomb's problem because whether or not the box was full depended on whether or not UDT one-boxed when in a different epistemic state.

Comment by dankane on Causal decision theory is unsatisfactory · 2014-09-18T05:39:06.371Z · LW · GW

OK. Let me say this another way that involves more equations.

So let's let U(X,Y) be the utility that X gets when it plays prisoner's dilemma against Y. For a program X, let BOT^X be the program where BOT^X(Y) = X(BOT^X). Notice that BOT^X(Y) does not depend on Y. Therefore, depending upon what X is, BOT^X is either equivalent to CooperateBot or equivalent to DefectBot.

Now, you are claiming that UDT plays optimally against BOT^UDT because for any strategy X,

U(X, BOT^X) <= U(UDT, BOT^UDT).

This is true, because X(BOT^X) = BOT^X(X) by the definition of BOT^X, and therefore you cannot do better than CC. On the other hand, it is also true that for any X and any Y,

U(X, BOT^Y) <= U(CDT, BOT^Y).

This is because BOT^Y's behavior does not depend on X, and therefore you do optimally by defecting against it (or you could just apply the theorem that says that CDT wins if the universe cannot read your mind).
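Here is a minimal sketch of the second inequality's logic in Python: since BOT^Y ignores its opponent, it is equivalent to either CooperateBot or DefectBot, and against either fixed move defecting pays at least as much as cooperating. The numeric payoffs are the standard PD values, assumed for illustration.

```python
# Against a bot whose move is fixed (as BOT^Y's is, since BOT^Y ignores its
# opponent), defecting is at least as good as cooperating.
# Row player's payoffs under the standard PD values (assumed for illustration).
ROW_PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

for bot_move in ('C', 'D'):  # BOT^Y is equivalent to CooperateBot or DefectBot
    u_cooperate = ROW_PAYOFF[('C', bot_move)]
    u_defect = ROW_PAYOFF[('D', bot_move)]
    assert u_defect >= u_cooperate  # U(X, BOT^Y) <= U(CDT, BOT^Y) for a move-fixed opponent
    print(f"bot plays {bot_move}: cooperate gets {u_cooperate}, defect gets {u_defect}")
```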

Our disagreement stems from the fact that we are considering different counterfactuals. You seem to claim that UDT behaves correctly because

U(UDT, BOT^UDT) > U(CDT, BOT^CDT),

while I claim that CDT does because

U(CDT, BOT^UDT) > U(UDT, BOT^UDT).

And in fact, given the way that I phrased the scenario (which was that you play BOT^UDT, not that you play BOT^{you}, i.e. the mirror matchup), I happen to be right here. So justify it however you like, but UDT does lose this scenario.

Comment by dankane on Causal decision theory is unsatisfactory · 2014-09-18T05:19:16.888Z · LW · GW

No. BOT(X) is cooperate for all X. It behaves in exactly the same way that CooperateBot does; it just runs different, though equivalent, code.

And my point was that CDT does better against BOT than UDT does. I was asked for an example where CDT does better than UDT and the universe cannot read your mind except through your actions in counterfactuals. This is an example of such. In fact, in this example, the universe doesn't read your mind at all.

Also, your argument that UDT cannot possibly do better against BOT than it does is analogous to the argument that CDT cannot do better in the mirror matchup than it does. Namely, that CDT's outcome against CDT is at least as good as anything else's outcome against CDT. You aren't defining your counterfactuals correctly. You can do better against BOT than UDT does. You just have to not be UDT.

Comment by dankane on Causal decision theory is unsatisfactory · 2014-09-18T03:44:57.075Z · LW · GW

It's not UDT. It's the strategy that, against any opponent, does what UDT would do against BOT itself. In particular, it cooperates against any opponent. Therefore it is CooperateBot. It is just coded in a funny way.

To be clear, letting Y(X) be what Y does against X, we have that

BOT(X) = UDT(BOT) = C.

This is different from UDT: UDT(X) is D for some values of X. The two functions agree when X = UDT and in relatively few other cases.

Comment by dankane on Simulate and Defer To More Rational Selves · 2014-09-17T19:14:43.984Z · LW · GW

I think you mean that rational agents cannot be successfully blackmailed by other agents for which it is common knowledge that those agents can simulate them accurately and will only use blackmail if they predict it to be successful. All of this, of course, in the absence of mitigating circumstances (including, for example, the theoretical likelihood of other agents that reward you for counterfactually giving in to blackmail under these circumstances).

Comment by dankane on Causal decision theory is unsatisfactory · 2014-09-17T17:15:03.351Z · LW · GW

I suppose. On the other hand, is that because other people can read your mind, or because you have emotional responses that you cannot suppress and that are correlated with what you are thinking? This is actually critical to what counterfactuals you want to construct.

Consider for example the terrorist who would try to bring down an airplane that he is on given the opportunity. Unfortunately, he's an open book and airport security would figure out that he's up to something and prevent him from flying. This is actually inconvenient since it also means he can't use air travel. He would like to be able to precommit to not trying to take down particular flights so that he would be allowed on. On the other hand, whether or not this would work depends on what exactly airport security is picking up on. Are they actually able to discern his intent to cause harm, or are they merely picking up on his nervousness at being questioned by airport security. If it's the latter, would an internal precommitment to not bring down a particular flight actually solve his problem?

Put another way, is the TSA detecting the fact that the terrorist would down the plane if given the opportunity, or simply that he would like to do so (in the sense of getting extra utils from doing so).

Comment by dankane on Causal decision theory is unsatisfactory · 2014-09-17T16:18:33.794Z · LW · GW

I'm sure we could think of some

OK. Name one.

Comment by dankane on Causal decision theory is unsatisfactory · 2014-09-17T15:46:45.341Z · LW · GW

Fine. Your opponent actually simulates what UDT would do if Omega had told it that and returns the appropriate response (i.e. it is CooperateBot, although perhaps your finite prover is unable to verify that).

Comment by dankane on Causal decision theory is unsatisfactory · 2014-09-17T07:51:23.141Z · LW · GW

Actually, this is a somewhat general phenomenon. Consider, for example, the version of Newcomb's problem where the box is full "if and only if UDT one-boxes in this scenario".

UDT's optimality theorem requires that, in the counterfactual where it is replaced by a different decision theory, all of the "you"s referenced in the scenario remain "you" rather than "UDT". In the latter counterfactual, CDT provably wins. The fact that UDT wins these scenarios is an artifact of how you are constructing your scenarios.

Comment by dankane on Causal decision theory is unsatisfactory · 2014-09-17T07:32:52.719Z · LW · GW

Or how about this example, that simplifies things even further. The game is PD against CooperateBot, BUT before the game starts Omega announces "your opponent will make the same decision that UDT would if I told them this." This announcement causes UDT to cooperate against CooperateBot. CDT on the other hand, correctly deduces that the opponent will cooperate no matter what it does (actually UDT comes to this conclusion too) and therefore decides to defect.

Comment by dankane on Causal decision theory is unsatisfactory · 2014-09-17T06:47:18.616Z · LW · GW

The CDT agents here are equivalent to DefectBot

And the UDT agents are equivalent to CooperateBot. What's your point?

Comment by dankane on Causal decision theory is unsatisfactory · 2014-09-17T06:45:51.801Z · LW · GW

The CDT agents here win because they do not believe that altering their strategy will change the way that their opponents behave. This is actually true in this case, and even true for the UDT agents depending on how you choose to construct your counterfactuals. If a UDT agent suffered a malfunction and defected, it too would do better. In any case, the theorem that UDT agents perform optimally in universes that can only read your mind by knowing what you would do in hypothetical situations is false as this example shows.

UDT bots win in some scenarios where the initial conditions favor agents that behave sub-optimally (and by sub-optimally, I mean with counterfactuals constructed in the way implicit to CDT). The example above shows that sometimes they are punished for acting sub-optimally.

Comment by dankane on Causal decision theory is unsatisfactory · 2014-09-17T04:12:29.967Z · LW · GW

Actually thinking about it this way, I have seen the light. CDT makes the faulty assumption that your initial state is uncorrelated with the universe that you find yourself in (who knows, you might wake up in the middle of Newcomb's problem and find that whether or not you get $1000000 depends on whether or not your code is such that you would one-box in Newcomb's problem). UDT goes some way toward correcting this issue, but it doesn't go far enough.

I would like to propose a new, more optimal decision theory. Call it ADT for Anthropic Decision Theory. Actually, it depends on a prior, so assume that you've picked out one of those. Given your prior, ADT is the decision theory D that maximizes the expected (given your prior) lifetime utility of all agents using D as their decision theory. Note how agents using ADT do provably better than agents using any other decision theory.

Note that I have absolutely no idea what ADT does in, well, any situation, but that shouldn't stop you from adopting it. It is optimal after all.

Comment by dankane on Causal decision theory is unsatisfactory · 2014-09-17T03:34:32.811Z · LW · GW

Basically this, except there's no need to actually do it beforehand.

Actually, no. To implement things correctly, UDT needs to determine its entire strategy all at once. It cannot decide whether to one-box or two-box in Newcomb just by considering the Newcomb that it is currently dealing with. It must also consider all possible hypothetical scenarios where any other agent's action depends on whether or not UDT one-boxes.

Furthermore, UDT cannot decide what it does in Newcomb independently of what it does in the Counterfactual Mugging, because some hypothetical entity might give it rewards based on some combination of the two behaviors. UDT needs to compute its entire strategy (i.e. its response to all possible scenarios) all at the same time before it can determine what it should do in any particular situation [OK, not quite true: it might be able to prove that whatever the optimal strategy is, it involves doing X in situation Y, without actually determining the optimal strategy. Then again, this seems really hard, since doing almost anything directly from Kolmogorov priors is basically impossible].
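A minimal sketch of the "choose your whole strategy at once" point, with made-up scenario names, priors, and payoffs (none of these numbers come from the discussion): the utility earned in one scenario can depend on what the policy does in another, so the argmax has to range over complete policies rather than per-scenario actions.

```python
from itertools import product

# Made-up scenarios, priors, and payoffs, purely to illustrate the structure.
scenarios = ["newcomb", "anti_newcomb", "counterfactual_mugging"]
actions = ["one_box", "two_box"]  # pretend every scenario offers the same two actions
prior = {"newcomb": 0.5, "anti_newcomb": 0.3, "counterfactual_mugging": 0.2}

def utility(scenario, policy):
    # The payoff in anti_newcomb depends on what the policy does in newcomb --
    # this coupling is why the scenarios can't be optimized separately.
    if scenario == "newcomb":
        return 1_000_000 if policy["newcomb"] == "one_box" else 1_000
    if scenario == "anti_newcomb":
        return 1_000_000 if policy["newcomb"] == "two_box" else 0
    return 0

# Enumerate every complete policy (mapping from scenario to action) and pick
# the one with the highest prior-weighted utility.
best_policy = max(
    (dict(zip(scenarios, choice)) for choice in product(actions, repeat=len(scenarios))),
    key=lambda policy: sum(prior[s] * utility(s, policy) for s in scenarios),
)
print(best_policy)
```

Notice that which action is best in "newcomb" depends entirely on the assumed prior weights of "newcomb" versus "anti_newcomb", which is the point made below about needing the prior rather than just the posterior.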

Comment by dankane on Causal decision theory is unsatisfactory · 2014-09-17T03:25:42.690Z · LW · GW

Well, perhaps. I think that the bigger problem is that under reasonable priors P(Newcomb) and P(anti-Newcomb) are both so incredibly small that I would have trouble finding a meaningful way to approximate their ratio.

How confident are you that UDT actually one-boxes?

Also yeah, if you want a better scenario where UDT loses see my PD against 99% prob. UDT and 1% prob. CDT example.

Comment by dankane on Causal decision theory is unsatisfactory · 2014-09-17T03:21:31.759Z · LW · GW

CDT does not avoid this issue by "setting its priors to the delta function". CDT deals with this issue by being a theory where your course of action only depends on your posterior distribution. You can base your actions only on what the universe actually looks like rather than having to pay attention to all possible universes. Given that it's basically impossible to determine anything about what Kolmogorov priors actually say, being able to totally ignore parts of probability space that you have ruled out is a big deal.

... And this whole issue with not being able to self-modify beforehand only matters if your initial code affects the rest of the universe. To be more precise, it is only an issue if the problem is phrased in such a way that the universe you have to deal with depends on the code you are running. If we instantiate Newcomb's problem in the middle of the decision, UDT faces a world with the first box full while CDT faces a world with the first box empty. UDT wins because the scenario is in its favor before you even start the game.

If you really think that this is a big deal, you should try to figure out which decision theories are only created by universes that want to be nice to them and try using one of those.

Comment by dankane on Causal decision theory is unsatisfactory · 2014-09-16T22:45:11.396Z · LW · GW

I guess my point is that it is nonsensical to ask "what does UDT do in situation X" without also specifying the prior over possible universes that this particular UDT is using. Given that this is the case, what exactly do you mean by "losing game X"?

Comment by dankane on Causal decision theory is unsatisfactory · 2014-09-16T19:48:42.818Z · LW · GW

Actually, here's a better counter-example, one that actually exemplifies some of the claims of CDT optimality. Suppose that the universe consists of a bunch of agents (who do not know each others' identities) playing one-off PDs against each other. Now 99% of these agents are UDT agents and 1% are CDT agents.

The CDT agents defect for the standard reason. The UDT agents reason: "my opponent will do the same thing that I do with 99% probability; therefore, I should cooperate."

CDT agents get 99% DC and 1% DD. UDT agents get 99% CC and 1% CD. The CDT agents in this universe do better than the UDT agents, yet they are facing a perfectly symmetrical scenario with no mind reading involved.
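The arithmetic behind that comparison, with the standard PD payoffs (T=5, R=3, P=1, S=0) assumed purely for concreteness:

```python
# Expected one-shot payoffs in the 99% UDT / 1% CDT population, using the
# standard PD payoffs (T=5, R=3, P=1, S=0) as an illustrative assumption.
T, R, P, S = 5, 3, 1, 0
p_udt, p_cdt = 0.99, 0.01

# In this population UDT agents cooperate and CDT agents defect.
ev_cdt = p_udt * T + p_cdt * P  # defect vs. a cooperator, defect vs. a defector
ev_udt = p_udt * R + p_cdt * S  # cooperate vs. a cooperator, cooperate vs. a defector

print(f"CDT expected payoff: {ev_cdt:.2f}")  # 4.96
print(f"UDT expected payoff: {ev_udt:.2f}")  # 2.97
```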

Comment by dankane on Causal decision theory is unsatisfactory · 2014-09-16T19:18:41.554Z · LW · GW

I think some who favor CDT would claim that you are phrasing the counterfactual incorrectly. You are phrasing the situation as "you are playing against a copy of yourself" rather than "you are playing against an agent running code X (which just happens to be the same as yours) that thinks you are also running code X". If X=CDT, then TDT and CDT each achieve the result DD. If X=TDT, then TDT achieves CC, but CDT achieves DC.

In other words TDT does beat CDT in the self matchup. But one could argue that self matchup against TDT and self matchup against CDT are different scenarios, and thus should not be compared.

Comment by dankane on Causal decision theory is unsatisfactory · 2014-09-16T19:08:38.617Z · LW · GW

The fact that Newcomblike problems are fairly common in the real world is one facet of that motivation.

I disagree. CDT correctly solves all problems in which other agents cannot read your mind. Real world occurrences of mind reading are actually uncommon.

Comment by dankane on Causal decision theory is unsatisfactory · 2014-09-16T19:02:57.539Z · LW · GW

There's a difference between reasoning about your mind and actually reading your mind. CDT certainly faces situations in which it is advantageous to convince others that it does not follow CDT. On the other hand, this is simply behaving in a way that leads to the desired outcome. This is different from facing situations where you can only convince people of this by actually self-modifying. Those situations only occur when other people can actually read your mind.

Comment by dankane on Causal decision theory is unsatisfactory · 2014-09-16T16:51:51.343Z · LW · GW

Actually, I take it back. Depending on how you define things, UDT can still lose. Consider the following game:

I will clone you. One of the clones I paint red and the other I paint blue. The red clone I give $1000000 and the blue clone I fine $1000000. UDT clearly gets expectation 0 out of this. SMCDT, however, can replace its code with the following:

If you are painted blue: wipe your hard drive.

If you are painted red: change your code back to standard SMCDT.

Thus, SMCDT never actually has to play blue in this game, while UDT does.

Comment by dankane on Causal decision theory is unsatisfactory · 2014-09-16T16:11:27.221Z · LW · GW

OK. Fine. I will grant you this:

UDT is provably optimal if it has correct priors over possible universes and the universe can read its mind only through determining its behavior in hypothetical situations (because UDT basically just finds the behavior pattern that optimizes expected utility and implements it).

On the other hand, SMCDT is provably optimal in situations where it has an accurate posterior probability distribution, and where the universe can read its mind but not its initial state (because it just instantly self-modifies to the optimally performing program).

I don't see why the former set of restrictions is any more reasonable than the latter, and at least for SMCDT you can figure out what it would do in a given situation without first specifying a prior over possible universes.

I'm also not convinced that it is even worth spending so much effort trying to decide the optimal decision theory in situations where the universe can read your mind. This is not a realistic model to begin with.

Comment by dankane on Causal decision theory is unsatisfactory · 2014-09-16T14:56:24.644Z · LW · GW

Which is actually one of the annoying things about UDT: your strategy cannot depend simply on your posterior probability distribution; it has to depend on your prior probability distribution. How you even in practice determine your priors for Newcomb vs. anti-Newcomb is really beyond me.

But in any case, assuming that one is more common, UDT does lose this game.

Comment by dankane on Causal decision theory is unsatisfactory · 2014-09-16T05:22:03.503Z · LW · GW

Yes. And likewise if you put an unconditional extortion-refuser in an environment populated by unconditional extortionists.

Comment by dankane on Causal decision theory is unsatisfactory · 2014-09-16T03:13:52.126Z · LW · GW

Fine. How about this: "Have $1000 if you would have two-boxed in Newcomb's problem."

Comment by dankane on Causal decision theory is unsatisfactory · 2014-09-16T02:11:07.143Z · LW · GW

Only if the adversary makes its decision to attempt extortion regardless of the probability of success.

And thereby the extortioner's optimal strategy is to extort independently of the probability of success. Actually, this is probably true in a lot of real cases (say ransomware) where the extortioner cannot actually ascertain the probability of success ahead of time.

Comment by dankane on Causal decision theory is unsatisfactory · 2014-09-16T02:08:00.209Z · LW · GW

Well, if the universe cannot read your source code, both agents are identical and provably optimal. If the universe can read your source code, there are easy scenarios where one or the other does better. For example,

"Here have $1000 if you are a CDT agent" Or "Here have $1000 if you are a UDT agent"

Comment by dankane on Causal decision theory is unsatisfactory · 2014-09-16T01:43:42.795Z · LW · GW

Eliezer thinks his TDT will refuse to give in to blackmail, because outputting another answer would encourage other rational agents to blackmail it.

This just means that TDT loses in honest one-off blackmail situations (in reality, you don't give in to blackmail because doing so will cause other people to blackmail you, whether or not you then self-modify to never give in to blackmail again). TDT only does better if the potential blackmailers read your code in order to decide whether or not blackmail will be effective (and then only if your priors say that such blackmailers are more likely than anti-blackmailers who give you money if they think you would have given in to blackmail). Then again, if the blackmailers think that you might be a TDT agent, they just need to precommit to using blackmail whether or not they believe that it will be effective.

Actually, this suggests that blackmail is a game that TDT agents lose really badly at when playing against each other. The TDT blackmailer will decide to blackmail regardless of effectiveness and the TDT blackmailee will decide to ignore the blackmail, thus ending in the worst possible outcome.
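A toy payoff table makes the last point explicit. All of the numbers here are illustrative assumptions; the only structural claim is that "blackmail regardless" meeting "refuse regardless" lands in the cell that is worst for both players.

```python
# Illustrative payoffs for a one-shot blackmail game: (blackmailer, target).
payoffs = {
    ("no_blackmail", "give_in"): (0, 0),
    ("no_blackmail", "refuse"):  (0, 0),
    ("blackmail",    "give_in"): (5, -5),     # extortion succeeds
    ("blackmail",    "refuse"):  (-10, -10),  # threat carried out: worst cell for both
}

# The commitments described above: the TDT blackmailer blackmails regardless of
# effectiveness, and the TDT target refuses regardless of the threat.
print(payoffs[("blackmail", "refuse")])       # (-10, -10)
```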