Is EDT correct? Does "EDT" == "logical EDT" == "logical CDT"?
post by Vivek Hebbar (Vivek) · 2023-05-08T02:07:17.602Z · LW · GW · 1 comment

This is a question post.
Main question: Does EDT / "logical EDT"[1] make any wrong decisions, once the "tickle defense" is accounted for? And is there any real distinction between EDT, "logical EDT", and "logical CDT"?
Relatedly, I'm having trouble constructing any case which distinguishes logical EDT from some kind of "logical CDT" where some facts are "upstream" of others. Does anyone have an example where:
- There is a logical fact which could plausibly be upstream of both the agent's decision and some other thing which feels like it "should" be unrelated.
My failed attempt at this:
- "There is some law of physics which affects both the agent's decision to chew gum and the number of planets in the universe. The unknown logical fact is the prevalence of that law in the Solomonoff prior."
- I think this fails by the tickle defense. It seems plausible for a law of physics to affect the prevalence of gum-chewing agents via {physics -> evolution -> values/impulses}. However, I claim that it "should" cease to correlate with the agent's decision once you condition on what the agent already knows about its own thinking process.[2]
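To make the screening-off claim concrete, here is a minimal Monte Carlo sketch of a toy model in this spirit. Everything in it is invented for illustration -- the variables (`law`, `planets`, `impulse`, `chews`) and all the probabilities -- and the only point is the conditional-independence structure:

```python
import random

def sample():
    # Hypothetical toy model: a binary "law of physics" influences both
    # the number of planets and the agent's gum-chewing impulse.
    law = random.random() < 0.5
    planets = random.random() < (0.9 if law else 0.1)   # "many planets"?
    impulse = random.random() < (0.8 if law else 0.2)   # urge to chew gum
    # The final action depends on the law only *via* the impulse.
    chews = random.random() < (0.95 if impulse else 0.05)
    return law, planets, impulse, chews

samples = [sample() for _ in range(200_000)]

def p_planets(cond):
    """Estimate P(many planets | cond) from the samples."""
    matching = [s for s in samples if cond(s)]
    return sum(s[1] for s in matching) / len(matching)

# Unconditionally, the action is evidence about the planets:
print(p_planets(lambda s: s[3]), p_planets(lambda s: not s[3]))
# But conditioning on the impulse -- what the agent already knows about
# its own thinking process -- screens the action off from the planets:
print(p_planets(lambda s: s[2] and s[3]),
      p_planets(lambda s: s[2] and not s[3]))
```

The first pair of estimates differ (the action is unconditional evidence about the planets), while the second pair roughly coincide: once the impulse is given, the action carries no further news about physics.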
It also seems like EDT is not really distinct from logical EDT, unless you make some special effort to separate logical and factual uncertainty?
- Suppose we train a predictor using e.g. supervised learning on past trajectories. Conditioning on action A also gives the predictor information about logical facts which affected the decision. I claim that the predictor will correctly update in a "logically evidential" way -- it will use the new information to update on other logical facts, and will do so in a purely "conditional" way. (See the sketch after this list.)
- It feels like you'd have to make weird design choices to end up with a "non-logical" EDT agent (if that's even a well-defined concept):
- If your world model is freely learned, there's no reason for it to make a distinction between logical and factual uncertainty.
- If your world model explicitly represents them separately, then you'd have to deliberately omit any Bayesian update procedure for {observation -> updated logical facts}. But it seems like such a procedure would be clearly desirable.
- If your world model is explicitly forbidden from modelling logical uncertainty... what? Why would you do that? And how?
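Here is the sketch of the predictor claim promised above. The joint model is hypothetical (a binary logical fact L, action A, and outcome O, with made-up probabilities); the point is that a predictor fit to trajectories approximates the conditional P(O | A), which already bakes in an evidential update on L:

```python
# Hypothetical joint model: a binary logical fact L influences both the
# agent's action A and an outcome O; O does not causally depend on A.
P_L = {True: 0.5, False: 0.5}
P_A_GIVEN_L = {True: 0.9, False: 0.1}   # P(A=1 | L)
P_O_GIVEN_L = {True: 0.8, False: 0.2}   # P(O=1 | L)

def joint(l, a, o):
    pa = P_A_GIVEN_L[l] if a else 1 - P_A_GIVEN_L[l]
    po = P_O_GIVEN_L[l] if o else 1 - P_O_GIVEN_L[l]
    return P_L[l] * pa * po

def p_o_given_a(a):
    # What a supervised predictor fit to trajectories approximates:
    # P(O=1 | A=a), marginalizing over L with the *posterior* P(L | A=a).
    num = sum(joint(l, a, True) for l in (True, False))
    den = sum(joint(l, a, o) for l in (True, False) for o in (True, False))
    return num / den

def p_o_do_a(a):
    # A "causal" prediction P(O=1 | do(A=a)) would keep the *prior* on L;
    # `a` is unused because O has no causal dependence on A here.
    return sum(P_L[l] * P_O_GIVEN_L[l] for l in (True, False))

print(p_o_given_a(True), p_o_given_a(False))  # 0.74 vs 0.26: updates on L
print(p_o_do_a(True), p_o_do_a(False))        # 0.5 vs 0.5: no update on L
```

The conditional predictions differ across actions even though O does not causally depend on A at all -- exactly the "logically evidential" behavior claimed above -- whereas a prediction that held the prior over L fixed would not.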
I'm not up to speed on the literature, but hopefully people will surface things here if they're relevant.
- ^
Where by "logical EDT" I mean "EDT, but the world-model and conditioning procedure includes logical uncertainity". I don't know if this is distinct from just saying "EDT".
- ^
The details here are a bit complicated and disputable. Imagine an agent whose decision procedure works as follows:
1. Think about the decision using EDT, and come to some conclusion.
2. Randomly ignore the conclusion X% of the time, and go with some "impulsive" decision. Otherwise follow the conclusion.
Now, it is totally possible for the agent's decision to have "weird correlations" with various other things (like the laws of physics), via the nature of the "impulsive" decision.
However, I normatively claim that the EDT reasoning should consider itself to be deciding its own conclusion, not deciding the agent's final decision.
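A minimal sketch of this two-stage procedure, assuming toy payoffs and a hypothetical 10% tremble rate in place of "X%" (everything here is invented for illustration):

```python
import random

ACTIONS = ("chew", "abstain")
TREMBLE_PROB = 0.10  # hypothetical stand-in for "X%"

def conditional_eu(conclusion):
    # Stand-in for the EDT computation: expected utility conditional on
    # *the conclusion being this one* -- not conditional on the final
    # action, which the impulsive branch below may override.
    return {"chew": 1.0, "abstain": 0.0}[conclusion]  # toy payoffs

def decide(impulsive_action):
    # Step 1: EDT reasoning, which decides its own conclusion.
    conclusion = max(ACTIONS, key=conditional_eu)
    # Step 2: the tremble. "Weird correlations" between the final action
    # and e.g. the laws of physics can enter here via the impulse, but
    # step 1's conditioning should not treat this branch as its choice.
    if random.random() < TREMBLE_PROB:
        return impulsive_action
    return conclusion
```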
(Thanks to Linh Chi Nguyen for previous discussion on this point.)
Answers
answer by Max H

How would you apply EDT + tickle defense to decide not to show a low card when playing the counterfactual mugging poker game [LW · GW]? It seems like the EDT agent doesn't consider counterfactuals, and thus would choose to reveal the low card when it is dealt that card, getting a higher payoff at a cost to its counterfactual selves. Maybe I'm missing something obvious about how to apply the tickle defense to this situation though.
Comments
comment by Chris_Leong · 2023-05-08T11:59:39.493Z · LW(p) · GW(p)
I’m probably missing something obvious, but how does the tickle defence handle XOR Blackmail?