# Critiquing Scasper's Definition of Subjunctive Dependence

post by Heighn · 2022-01-10T16:22:56.355Z · LW · GW · 1 comments

Before we get to the main part, a note: this post discusses subjunctive dependence and assumes the reader has a background in Functional Decision Theory (FDT). For readers who don't, I recommend this paper by Eliezer Yudkowsky and Nate Soares. I will leave their definition of subjunctive dependence here, because it is so central to this post:

When two physical systems are computing the same function, we will say that their behaviors “subjunctively depend” upon that function.

So, I just finished reading Dissolving Confusion around Functional Decision Theory [LW · GW] by Stephen Casper (scasper [LW · GW]). In it, Casper explains FDT quite well and makes some good points, such as noting that FDT does not assume causation can happen backwards in time. However, Casper makes a claim about subjunctive dependence that's not only wrong, but might add to confusion around FDT:

Suppose that you design some agent who enters an environment with whatever source code you gave it. Then if the agent’s source code is fixed, a predictor could exploit certain statistical correlations without knowing the source code. For example, suppose the predictor used observations of the agent to make probabilistic inferences about its source code. These could even be observations about how the agent acts in other Newcombian situations. Then the predictor could, without knowing what function the agent computes, make better-than-random guesses about its behavior. This falls outside of Yudkowsky and Soares’ definition of subjunctive dependence, but it has the same effect.

To see where Casper goes wrong, let's look at a clear example. The classic Newcomb's problem [? · GW] will do, but now, Omega isn't running a model of your decision procedure; instead, she has observed your one-box/two-box choices in 100 earlier instances of the game and uses the percentage of times you one-boxed for her prediction. That is, Omega predicts you one-box iff that percentage is greater than 50. Now, in the version where Omega *is* running a model of your decision procedure, you and Omega's model are subjunctively dependent on the same function. In our current version, this isn't the case, as Omega isn't running such a model; however, Omega's prediction is based on observations that are causally influenced by your historic choices, made by historic versions of you. Crucially, "current you" and every "historic you" therefore *subjunctively depend on your decision procedure*. The FDT graph for this version of Newcomb's problem looks like this:

Let's ask FDT's question: “Which output of this decision procedure causes the best outcome?” If it's two-boxing, your decision procedure causes each historical instance of you to two-box, which causes Omega to predict you two-box. Your decision procedure *also* causes current you to two-box (the oval box on the right). The payoff, then, is calculated as it is in the classic Newcomb's problem, and equals $1,000.

However, if the answer to the question is one-boxing, then every historical you and current you one-box. Omega predicts you one-box, giving you a payoff of $1,000,000.
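The two pure strategies above can be checked with a small sketch. This is my own illustration, not from the post: the function names, and the assumed 100-game observed history, are hypothetical.

```python
# A minimal sketch of the frequency-based Omega described above.
# The function names and the 100-game history are my own framing
# of the classic Newcomb payoffs, not taken from the post.

def omega_predicts_one_box(history):
    """Omega predicts one-boxing iff the agent one-boxed in more
    than 50% of the observed historical games."""
    return history.count('one') / len(history) > 0.5

def payoff(choice, predicted_one_box):
    """Box B holds $1,000,000 iff Omega predicted one-boxing;
    box A always holds $1,000. Two-boxing takes both boxes."""
    box_b = 1_000_000 if predicted_one_box else 0
    return box_b if choice == 'one' else box_b + 1_000

# A consistent one-boxer: 100 observed one-box choices.
print(payoff('one', omega_predicts_one_box(['one'] * 100)))  # 1000000

# A consistent two-boxer: 100 observed two-box choices.
print(payoff('two', omega_predicts_one_box(['two'] * 100)))  # 1000
```

A consistent one-boxer walks away with $1,000,000 and a consistent two-boxer with only $1,000, matching the classic analysis.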

Even better, if we assume FDT only faces this kind of Omega (and knows how this Omega operates), FDT can easily exploit Omega by one-boxing more than 50% of the time and two-boxing in the remaining cases. That way, Omega keeps predicting you one-box and keeps filling box B. So when you one-box, you get $1,000,000, and when you two-box, you get the maximum payoff of $1,001,000. This way, FDT can achieve an average payoff approaching $1,000,500. I learned this from a conversation with one of the original authors of the FDT paper, Nate Soares (So8res [LW · GW]).
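This mixed strategy can also be simulated. The sketch below rests on my own assumptions, not details from the post: a prior history of 100 one-box choices, and an Omega who updates her observed percentage after every game. One-boxing just over 50% of the time keeps her prediction fixed on one-boxing:

```python
import random

def average_payoff(n_games, p_one_box, seed=0):
    """Simulate n_games against the frequency-based Omega described
    above, one-boxing with probability p_one_box. Omega predicts
    one-boxing iff the agent's observed one-boxing percentage so far
    is greater than 50."""
    rng = random.Random(seed)
    ones, games = 100, 100   # assumed prior history: 100 one-box choices
    total = 0
    for _ in range(n_games):
        predicted_one = ones / games > 0.5
        one_box = rng.random() < p_one_box
        box_b = 1_000_000 if predicted_one else 0
        total += box_b if one_box else box_b + 1_000
        ones += one_box
        games += 1
    return total / n_games

# One-boxing 51% of the time keeps Omega predicting one-boxing, so the
# average payoff lands near 1,001,000 - 0.51 * 1,000, i.e. about $1,000,490.
print(round(average_payoff(100_000, 0.51)))
```

As the one-boxing probability approaches 50% from above, the average payoff approaches the $1,000,500 bound.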

So FDT solves the above version of Newcomb's problem beautifully, and subjunctive dependence is very much in play here. Casper, however, offers his own definition of subjunctive dependence:

I should consider predictor *P* to “subjunctively depend” on agent *A* to the extent that *P* makes predictions of *A*’s actions based on correlations that cannot be confounded by my choice of what source code *A* runs.

I have yet to see a problem of decision theoretic significance that falls within *this* definition of subjunctive dependence, but outside of Yudkowsky and Soares' definition. Furthermore, subjunctive dependence isn't always about predicting future actions, so I object to the use of "predictor" in Casper's definition. Most importantly, though, note that for a decision procedure to have any effect on the world, something or somebody must be computing (part of) it - so coming up with a Newcomb-like problem where your decision procedure has an effect at two different times/places *without* Yudkowsky and Soares' subjunctive dependence being in play seems impossible.

## 1 comments

Comments sorted by top scores.

## comment by scasper · 2022-01-21T05:45:19.060Z · LW(p) · GW(p)

Thanks for this post, I think it has high clarificational value and that your interpretation is valid and good. In my post, I failed to cite Y&S's actual definition and should have been more careful. I ended up critiquing a definition that probably resembled MacAskill's definition more than Y&S's, and it seems to have been somewhat of an accidental strawperson. In fairness to me though, Y&S never offered any example with the minimal conditions for SD to apply in their original paper while I did. This is part of what led to MacAskill's counterpost.

This all said, I do think there is something that my definition offers (clarifies?) that Y&S's does not. Consider your example. Suppose I have played 100 Newcombian games and one-boxed each time. Your Omega will then predict that on the 101st, I'll one-box again. If I make decisions independently each time I play the game, then we have the example you presented and which I agree with. But I think it's more interesting if I am allowed to change my strategy. From my perspective as an agent trying to confound Omega and win, I should not consider Omega's predictions and my actions to subjunctively depend, and my definition would say so. Under the definition from Y&S, I think it's less clear in this situation what I should think. Should I say that we're SD "so far"? Probably not. Should I wait until I finish all interaction with Omega and then decide whether or not we were SD in retrospect? Seems silly. So I think my definition may lead to a more practical understanding than Y&S's.

Do you think we're about on the same page? Thanks again for the post.