Some work on connecting UDT and Reinforcement Learning
post by IAFF-User-111 (Imported-IAFF-User-111) · 2015-12-17T23:58:25.000Z · LW · GW · 5 comments
This is a link post for https://drive.google.com/file/d/0B4YvzzJCIsowc1VCYWdBRXlHMnc/view?usp=sharing
comment by paulfchristiano · 2015-12-30T18:08:24.000Z · LW(p) · GW(p)
In order to distinguish EDT or UDT from CDT, we need to consider inferences from the choice policy other than the causal effects of the agent following it. For example, we need to exploit some correlation between the agent's choice of policy and the choices or beliefs of another agent. This seems to require a richer model than we usually use in reinforcement learning.
If we don't have any nontrivial interaction between the policy and the outcome, then the EDT/CDT distinction collapses. This seems to happen in your setting.
I don't see a good way to construct a suitable practical model, though I haven't thought about it too much.
Replies from: paulfchristiano, Imported-IAFF-User-111
↑ comment by paulfchristiano · 2016-01-29T19:36:18.000Z · LW(p) · GW(p)
The difference between EDT and CDT only appears when there are non-causal correlations between the environment and the agent's choice of policy. But in the setting you described, the only impact of the policy is on the agent's actions, which then causally affect the environment. In this setting, EDT always makes the same recommendation as CDT, since conditioning is the same as causal intervention.
UDT also makes the same choices as EDT, because the "behavior after observing X" only affects what happens after observing X. So it doesn't matter whether we update.
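As a concrete sanity check of this collapse, here is a minimal sketch (the distributions are invented for illustration, not taken from the write-up): when the model factors as P(pi) · P(a | pi) · P(r | a), conditioning on the policy and intervening on it give the same expected reward.

```python
# Toy joint model in the spirit of this setting: the policy influences the
# trajectory only through the action it emits, P(pi) * P(a | pi) * P(r | a).
P_pi = {"pi0": 0.5, "pi1": 0.5}
P_a_given_pi = {"pi0": {"a0": 0.9, "a1": 0.1},
                "pi1": {"a0": 0.2, "a1": 0.8}}
P_r_given_a = {"a0": {0.0: 0.7, 1.0: 0.3},
               "a1": {0.0: 0.2, 1.0: 0.8}}

def joint():
    for pi, p1 in P_pi.items():
        for a, p2 in P_a_given_pi[pi].items():
            for r, p3 in P_r_given_a[a].items():
                yield pi, a, r, p1 * p2 * p3

def edt_value(pi):
    # EDT: E[r | policy = pi], conditioning the joint distribution on the policy.
    num = sum(r * p for pi_, _, r, p in joint() if pi_ == pi)
    den = sum(p for pi_, _, _, p in joint() if pi_ == pi)
    return num / den

def cdt_value(pi):
    # CDT: E[r | do(policy = pi)]. The intervention replaces P(pi) by a point
    # mass; since the policy has no other route into the trajectory, this is
    # sum_a P(a | pi) * E[r | a].
    return sum(P_a_given_pi[pi][a] * sum(r * p for r, p in P_r_given_a[a].items())
               for a in P_a_given_pi[pi])

for pi in P_pi:
    assert abs(edt_value(pi) - cdt_value(pi)) < 1e-9
    print(pi, edt_value(pi), cdt_value(pi))   # identical: conditioning = intervening
```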
I might be missing some aspect of the model that correlates decisions and outcomes though.
One candidate is forgetfulness. If you don't keep a record of your past states, then your decision affects both what happens in the future and your beliefs about what state you are currently in. So I guess if we have forgetting, then the model you described can capture a difference.
It seems like RL with forgetting is generally a pretty subtle subject. As far as I can tell, direct policy search is the only algorithm people use in this setting. If episodes are independent, this is the same as UDT (not CDT).
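For a minimal illustration of that last point, here is a sketch of direct policy search on a problem with forgetting; the payoffs are the usual absent-minded-driver textbook ones, not anything from the write-up. The policy that maximizes the ex-ante expected utility is exactly the one UDT recommends.

```python
import numpy as np

# Absent-minded-driver-style episode: two indistinguishable intersections, and a
# memoryless policy given by a single probability p of continuing at any
# intersection. Standard payoffs: exit at the first intersection = 0,
# exit at the second = 4, continue past both = 1.
def expected_utility(p):
    return (1 - p) * 0.0 + p * (1 - p) * 4.0 + p * p * 1.0

# Direct policy search: optimize the ex-ante expected utility of the policy as a
# whole (the UDT-style evaluation), rather than updating on "I am at an
# intersection right now".
ps = np.linspace(0.0, 1.0, 10001)
best = ps[int(np.argmax([expected_utility(p) for p in ps]))]
print(best, expected_utility(best))   # p ~ 2/3, expected utility ~ 4/3
```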
Replies from: Imported-IAFF-User-111
↑ comment by IAFF-User-111 (Imported-IAFF-User-111) · 2016-02-11T21:40:11.000Z · LW(p) · GW(p)
"But in the setting you described, the only impact of the policy is on the agent’s actions"
I don't think so. P_M(\zeta | \pi) is meant to describe the distribution over trajectories given a policy (according to the model). Unless I'm missing something, the model could contain non-causal correlations.
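For concreteness, here is a hypothetical model of that kind (the numbers are invented for illustration, not taken from the write-up): the trajectory distribution correlates another agent's move with the policy, so evaluating a policy by conditioning on it (EDT/UDT) and by intervening on the action alone (CDT) come apart.

```python
# A hypothetical P_M(zeta | pi) that bakes in a non-causal correlation: a
# Newcomb-like predictor whose move is correlated with the agent's policy.
ACCURACY = 0.9   # model's credence that the predictor matches the policy

def p_box_full(pi):
    # Under the model, "one-box" policies are usually predicted correctly.
    return ACCURACY if pi == "one-box" else 1 - ACCURACY

def edt_value(pi):
    # EDT/UDT: evaluate the policy under the model's full trajectory
    # distribution, including the policy-predictor correlation.
    full = p_box_full(pi)
    if pi == "one-box":
        return full * 1_000_000
    return full * 1_001_000 + (1 - full) * 1_000

def cdt_value(pi):
    # CDT: intervene on the action only; the predictor's move keeps the marginal
    # probability it had before the intervention (here, from a 50/50 prior over
    # policies), so two-boxing always looks better by $1,000.
    marginal_full = 0.5 * p_box_full("one-box") + 0.5 * p_box_full("two-box")
    return marginal_full * 1_000_000 + (1_000 if pi == "two-box" else 0)

for pi in ("one-box", "two-box"):
    print(pi, edt_value(pi), cdt_value(pi))
# EDT/UDT prefers one-boxing under this model; CDT prefers two-boxing.
```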
Replies from: paulfchristiano
↑ comment by paulfchristiano · 2016-02-17T23:43:25.000Z · LW(p) · GW(p)
I see; you're right.
You mention that P_M could reflect the true dynamics of the environment; I read that and assumed it was a causal model mapping a (state, action) pair to the next state. But if it captures a more general state of uncertainty, then this does pick up the difference between EDT/UDT and CDT.
Note that if P_M reflects the agent's logical uncertainty about its own behavior, then we can't generally expand the expectations out as an integral over possible trajectories.
For example, if I train one model to map x to E[y(x)|x], and I train another model to map (x,b) to P(y(x)=b), then the two quantities won't generally be related by integration.
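A small sketch of that point, under the simplifying assumption that y is binary so the true E[y(x)|x] and P(y(x)=1|x) coincide: two separately trained approximators of these quantities need not agree, so you cannot recover one from the other by integration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data with a binary outcome, so the true E[y | x] equals P(y = 1 | x).
x = rng.uniform(-3, 3, size=500)
y = (rng.uniform(size=500) < 1 / (1 + np.exp(-2 * x))).astype(float)
X = np.column_stack([np.ones_like(x), x])

# Model 1: trained to output E[y | x] directly, here by linear least squares.
w_reg, *_ = np.linalg.lstsq(X, y, rcond=None)

def e_hat(q):
    return np.array([1.0, q]) @ w_reg

# Model 2: trained to output P(y = 1 | x), here by logistic regression
# (plain gradient ascent on the log-likelihood).
w_clf = np.zeros(2)
for _ in range(2000):
    p = 1 / (1 + np.exp(-X @ w_clf))
    w_clf += 0.1 * X.T @ (y - p) / len(y)

def p_hat(q):
    return 1 / (1 + np.exp(-np.array([1.0, q]) @ w_clf))

# "Integrating" model 2, sum_b b * P(y = b | x), gives p_hat(x). Because the two
# models were trained separately (different function classes, different losses),
# this does not recover model 1's estimate of E[y | x].
for q in (-2.0, 0.0, 2.0):
    print(f"x={q:+.1f}  E-model: {e_hat(q):+.3f}  integrated P-model: {p_hat(q):+.3f}")
```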
When thinking about the connection between theoretical frameworks and practical algorithms, I think this would be an interesting issue to push on.
↑ comment by IAFF-User-111 (Imported-IAFF-User-111) · 2016-01-27T23:53:10.000Z · LW(p) · GW(p)
I don't understand why you say:

- it "seems to require a richer model than we usually use in [RL]".
- "This seems to happen in your setting."
- Are you suggesting that a model as I've defined it is not satisfactory/sufficient for some reason?
- Can you elaborate a bit?