XOR Blackmail & Causality
post by abramdemski · 2017-11-15T04:24:18.960Z · score: 9 (2 votes) · LW · GW · 4 comments
I edited my previous post to note that I’m now much less optimistic about the direction I was going in. This post is to further elaborate the issue and my current position.
Counterfactual reasoning is something we don’t understand very well, and it has so many free parameters that it can be made to justify just about any solution to a decision problem that one wants on intuitive grounds. So, it would be nice to eliminate it from our ontology – to reduce the cases in which it truly captures something important to machinery we understand, and to write off the other cases as “counterfactual-of-the-gaps” in need of some solution other than counterfactuals.
My approach to this involved showing that, in many cases, EDT learns to act like CDT because its knowledge of its own typical behavior screens off the action from the correlations which are generally thought to make EDT cooperate in one-shot prisoner’s dilemma with similar agents, one-box in Newcomb’s problem, and so on. This is essentially a version of the tickle defense. I also pointed out that the same kind of self-knowledge constraint is needed to deal with some counterexamples to CDT; so, CDT can’t be justified as a way of dealing with cases of failure of self-knowledge in general. Instead, CDT seems to improve the situation in some cases of self-knowledge failure, while EDT does better in other such cases.
This suggests a view in which the self-knowledge constraint is a rationality constraint, so the tickle defense is thought of as being true for rational agents, and CDT=EDT under these conditions of rationality. I suggested that problems for which this was not true had to somehow violate the agent’s ability to perform experiments in the world; i.e., the decision problem would have to be set up in such a way as to prevent the agent from decorrelating its actions from things in the environment which are not causally downstream of its actions. This seems in some sense unfair, as the environment is preventing the agent from correctly learning the causal relationships through experimentation. I called this condition the law of logical causality when it first occurred to me, and mixed-strategy implementability in the setup where I proved conditions for CDT=EDT.
In XOR Blackmail with a perfect predictor, however, mixed-strategy implementability is violated in a way which does not intuitively seem unfair. As a result, knowledge of what sort of thing you do in XOR blackmail is not sufficient to decorrelate your actions from things which you have no control over. Restricting to the epsilon-exploration case, so that the conditional probabilities are well-defined, what seems to happen is that the epsilon-exploration bit correlates the action you take with the disaster (via the XOR which determines whether the letter is sent). On the other hand, it seems as if CDT should be able to get the right answer.
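A small simulation can illustrate the correlation. The specific numbers below (a 1% disaster rate, a 0.1% exploration rate, and a baseline policy of paying upon receiving the letter) are illustrative assumptions, not taken from anywhere in particular; the point is only that, once we condition on the letter arriving, the XOR forces the (explored) action and the disaster to be perfectly anti-correlated:

```python
# A minimal sketch, assuming a 1% disaster rate, a 0.1% epsilon-exploration
# rate, and a baseline policy of paying upon receiving the letter (all
# illustrative parameters). The predictor is assumed to be perfect, i.e. it
# predicts the action *including* the exploration bit.
import random

random.seed(0)
rows = []  # (pays, disaster) in worlds where the letter is sent
for _ in range(200_000):
    disaster = random.random() < 0.01
    explore = random.random() < 0.001
    pays = True ^ explore        # the explored action
    letter = pays ^ disaster     # letter sent iff exactly one of {pays, disaster}
    if letter:
        rows.append((pays, disaster))

# Conditioned on the letter, the XOR makes the action and the disaster
# perfectly anti-correlated: you pay exactly when there is no disaster.
print(all(pays != disaster for pays, disaster in rows))  # prints True
```

Without the exploration bit, a pay-on-letter policy would never produce a letter-world in which the agent refuses, so the conditional probability of disaster given refusal would be undefined; exploration makes it defined, and equal to one.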
However, I’m unable to come up with a causal Bayes net which seems to faithfully represent the problem, so that I can properly compare how CDT and EDT reason about it in the same representation. It seems like the letter has to be both a parent and a child of the action. I thought I could represent things properly by having a copy of the action node, representing the simulation of the agent which the predictor uses to predict; but, I don’t see how to represent the perfect correlation between the copy and the real action without effectively severing the other parents of the real action.
Anyone have ideas about how to represent XOR Blackmail in a causal network?
Here we go. What had confused me was that CDT can’t reason as if its action makes the letter not get sent. The following causal graph works well enough:
- A: the action. True if money is sent to the blackmailer.
- A′: a copy of the action, representing the abstract mathematical fact of what the agent does if it sees the letter.
- L: Whether the letter is sent or not.
- D: The rare disaster.
- U: The utility.
- A: Has A′ and L as parents, with the following function: If the letter is sent, copy A′. Otherwise, false.
- A′: No parents.
- L: A′ and D as parents, with the XOR function determining L.
- D: No parents.
- U: A and D as parents, with the utility function as stated in the original XOR post.
Assume epsilon-exploration to ensure that the conditional probabilities are well-defined. Even if EDT knows its own policy, it sees itself as having control over the disaster. CDT, on the other hand, sees no such connection, so it refuses to send the money.
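This can be checked by brute-force enumeration over the graph above. The specific numbers (1% disaster probability, $1,000,000 disaster cost, $1,000 payment, 0.1% exploration rate, and a baseline policy of paying) are illustrative assumptions, not from the post; the sketch only demonstrates that evidential conditioning on A and intervening on A disagree in this graph:

```python
# Sketch: evaluate the causal graph (A', L, D, A, U) by enumeration.
# Assumed parameters for illustration: disaster probability 0.01, disaster
# cost $1,000,000, blackmail payment $1,000, exploration rate 0.001.
from itertools import product

P_DISASTER = 0.01
EPS = 0.001          # epsilon-exploration rate
COST_DISASTER = 1_000_000
COST_PAYMENT = 1_000

def utility(a, d):
    """U has A and D as parents: pay the blackmailer and/or the disaster cost."""
    return -(COST_PAYMENT if a else 0) - (COST_DISASTER if d else 0)

def joint(policy_pays):
    """Yield (prob, a_prime, letter, action, disaster) for every world."""
    for d, explore in product([False, True], repeat=2):
        p = (P_DISASTER if d else 1 - P_DISASTER) * (EPS if explore else 1 - EPS)
        a_prime = policy_pays ^ explore          # A': the explored policy output
        letter = a_prime ^ d                     # L = A' XOR D
        action = a_prime if letter else False    # A copies A' only if letter sent
        yield p, a_prime, letter, action, d

def edt_value(policy_pays, act):
    """E[U | letter received, A = act] -- evidential conditioning on A."""
    num = den = 0.0
    for p, _, letter, a, d in joint(policy_pays):
        if letter and a == act:
            num += p * utility(a, d)
            den += p
    return num / den

def cdt_value(policy_pays, act):
    """E[U | letter received, do(A = act)] -- intervening on A leaves D alone."""
    num = den = 0.0
    for p, _, letter, _, d in joint(policy_pays):
        if letter:
            num += p * utility(act, d)
            den += p
    return num / den

policy = True  # baseline policy: pay upon receiving the letter
print("EDT pay:", edt_value(policy, True), "EDT refuse:", edt_value(policy, False))
print("CDT pay:", cdt_value(policy, True), "CDT refuse:", cdt_value(policy, False))
```

Under these numbers, EDT conditions on the letter and sees that paying goes with no-disaster while refusing goes with disaster, so it prefers to pay; CDT severs A from its parents when intervening, holds the probability of disaster fixed, and refuses, matching the verdicts stated above.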