Is it the case that when humans approximate backward induction they violate the Markov property?
post by Quinn (quinn-dougherty) · 2021-01-16T16:22:21.561Z
This is a question post.
Backward induction is the process of reasoning backwards in time, from the end of a problem or situation, to determine a sequence of optimal actions.
Backward induction is used to solve Bellman equations in dynamic programming, which is leveraged in reinforcement learning.
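To make that concrete, here is a minimal value-iteration sketch in Python (the toy MDP and its numbers are made up purely for illustration): we solve the Bellman equations by filling in the value table from the final timestep backwards, which is exactly backward induction.

```python
import numpy as np

# Toy finite-horizon MDP (hypothetical numbers, purely illustrative):
# 2 states, 2 actions, horizon 3.
n_states, n_actions, horizon = 2, 2, 3
# P[s, a, s'] = transition probability; R[s, a] = immediate reward.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.1, 0.9]]])
R = np.array([[1.0, 0.0],
              [0.0, 2.0]])

# Backward induction: start from the terminal values and work backwards in time.
V = np.zeros((horizon + 1, n_states))           # V[t, s], with V[horizon] = 0
policy = np.zeros((horizon, n_states), dtype=int)
for t in reversed(range(horizon)):
    # Bellman equation: Q(s, a) = R(s, a) + sum_s' P(s' | s, a) * V[t+1, s']
    Q = R + P @ V[t + 1]
    V[t] = Q.max(axis=1)
    policy[t] = Q.argmax(axis=1)

print(V[0], policy)
```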
The Markov property holds of a process $\{X_n\}$ if the probability of each event depends only on the state attained in the previous event, formally if $P(X_{n+1} = x \mid X_1 = x_1, \ldots, X_n = x_n) = P(X_{n+1} = x \mid X_n = x_n)$.
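As a toy illustration (mine, not from the original post): in a Markov chain given by a transition matrix, the next-state distribution is a function of the current state alone, so any history you carry along is simply ignored.

```python
import numpy as np

rng = np.random.default_rng(0)

# Transition matrix for a 3-state Markov chain: T[s, s'] = P(X_{n+1} = s' | X_n = s).
T = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.8, 0.1],
              [0.3, 0.3, 0.4]])

def step(history):
    """Sample the next state. Only history[-1] matters -- that's the Markov property."""
    current = history[-1]
    return rng.choice(3, p=T[current])

history = [0]
for _ in range(10):
    history.append(step(history))
print(history)
```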
For an example of a human approximating backward induction, I'll use the chocolate example from Instrumental and Terminal Values: Alice wants chocolate, so she computes that she needs to go to the store, so she computes that she needs to drive, so she computes that she needs to grab her keys. If it's not obvious how grabbing keys is related to obtaining chocolate, it should be obvious when you look at the whole chain.
Formally, perhaps we might say that for any goal $g_0$ we can compute a list $[g_n, \ldots, g_1, g_0]$, where $g_{i+1}$ denotes "$g_{i+1}$ leads to $g_i$, and we can compute that fact from looking at $g_i$" or "from the desire for $g_i$ we can compute the desire for $g_{i+1}$".
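Here is a rough sketch of that formalism as code (the `subgoal_of` table and function names are hypothetical, just for illustration): each instrumental desire is computed from the single desire to its right, the way a Markov transition only looks at the current state.

```python
# Hypothetical "Markov-style" goal expansion: each new instrumental goal is
# computed only from the single goal it serves, never from the rest of the chain.
subgoal_of = {
    "obtain chocolate": "go to the store",
    "go to the store": "drive",
    "drive": "grab keys",
}

def plan_markov(terminal_goal):
    """Build the chain [most instrumental, ..., terminal_goal],
    looking only at the goal immediately to the right at each step."""
    chain = [terminal_goal]
    while chain[0] in subgoal_of:
        chain.insert(0, subgoal_of[chain[0]])   # depends only on chain[0]
    return chain

print(plan_markov("obtain chocolate"))
# ['grab keys', 'drive', 'go to the store', 'obtain chocolate']
```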
But! As Eliezer points out, if you're at the $g_2$ (driving) step and an oracle tells you that the store is out of chocolate, you're not forced into some semi-myopia where you feel the need to start driving but no longer know why. Instead, the whole chain might collapse at once and radically reconfigure itself from the initial desire for chocolate.
I feel like there's a possible wrong interpretation of backward induction that's analogous to the Markov property, where every desire in the chain is a function only of the preceding desire to which it is instrumental. This wrong interpretation lies precisely in a blind spot of my previous formalization! Put much better, though a little hard on the eyes:
Formally, perhaps we might say that for any goal $g_0$ we can compute a list of shrinking lists $[[g_n, \ldots, g_0], [g_{n-1}, \ldots, g_0], \ldots, [g_0]]$, where $g_{i+1}$ denotes "$g_{i+1}$ leads to $g_i$, which we can compute by looking at the sequence $[g_i, \ldots, g_0]$ (notice the recursion, with base case $[g_0]$)" or "from a desire for the list $[g_i, \ldots, g_0]$ we can compute the desire for $g_{i+1}$".
This formalism is robust to disruptions in the chain mid-planning, because the whole chain so far is the input used to compute each preceding instrumental goal.
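A companion sketch of this second formalism (again with hypothetical names): the function proposing the next instrumental desire takes the entire chain built so far, so when the oracle says the store is out of chocolate, the whole plan collapses and can be recomputed from the terminal value, rather than leaving a dangling "drive, but I forget why".

```python
# Hypothetical non-Markov expansion: the next instrumental goal is a function of
# the whole chain computed so far, not just its leftmost element.
def next_goal(chain, world):
    """chain = [most instrumental so far, ..., terminal goal]."""
    if "obtain chocolate" in chain and not world["store has chocolate"]:
        return None                     # the whole chain collapses at once
    if chain[0] == "obtain chocolate":
        return "go to the store"
    if chain[0] == "go to the store":
        return "drive"
    if chain[0] == "drive":
        return "grab keys"
    return None

def plan_full_history(terminal_goal, world):
    chain = [terminal_goal]
    while (g := next_goal(chain, world)) is not None:
        chain.insert(0, g)              # each step sees the entire chain
    return chain

print(plan_full_history("obtain chocolate", {"store has chocolate": True}))
# ['grab keys', 'drive', 'go to the store', 'obtain chocolate']
print(plan_full_history("obtain chocolate", {"store has chocolate": False}))
# ['obtain chocolate']  -- no instrumental goals survive; replan from the terminal value
```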
So I have a basic belief about this, and I want to find a way to verify whether it's true: when you're approximating backward induction IRL, is it the case that as you traverse the chain from right to left you're gaining information? That is, does every desire toward the left carry with it not only information about the desire immediately to its right but also about the entire rest of the list up to the base case / terminal value, as in the second formalism?