Resolving human inconsistency in a simple model

post by Stuart_Armstrong · 2017-10-04T15:25:51.053Z · LW · GW · 2 comments


Crossposted at the Intelligent Agents Forum.

(Apologies for the formatting; there is no LaTeX on LessWrong 2.0 yet.)

This post will present a simple model of an inconsistent human, and ponder how to resolve their inconsistency.

Let H be our agent, in a turn-based world. Let Rl and Rs be two simple reward functions at each turn. The reward Rl is thought of as being a ‘long-term’ reward, while Rs is a short-term one.

Define Rl(t) as the agent’s Rl reward at turn t (and similarly Rs(t) for Rs). Then, at turn t, the agent H has reward:

  • R(t) = (Sum over τ=0 to ∞) [Rl(t+τ)(γl)^τ + Rs(t+τ)(γs)^τ],

with constants 0<γs<γl≤1.

Essentially the Rs and the Rl have different discount rates, with the reward from Rs fading much faster than that of Rl. Therefore the agent will be motivated to get the Rs reward, but only if they can get it in the short term. Sex, drugs, food, and many other pleasures often have these features (though they are, of course, much more complicated).

The inconsistency is that the human will reset their R(t) at each turn. If there were a single discount rate, that wouldn’t be a problem, as that would just scale the whole reward function, and reward functions, like utility functions, give the same decisions when scaled.

But with two discount rates, this is inconsistent. The agent will try to follow Rl for long-term planning, but this will be disrupted if they encounter an Rs along the way (and then presumably berate themselves for the lack of self-discipline). This can also be seen as a variant of the “humans are composed of multiple subagents” model, with Rs corresponding to a short-term greedy subagent.
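To make the inconsistency concrete, here is a minimal Python sketch of the preference reversal. The payoff sizes, the ten-turn delay, and the “wait vs indulge” setup are illustrative assumptions, not part of the model above; only the two discount rates come from the formula for R(t). Evaluated ten turns in advance, H prefers the long-term Rl option; re-evaluating R(t) at the moment of temptation flips the choice.

    import numpy as np

    # Discount factors from the model: 0 < gamma_s < gamma_l <= 1 (values illustrative).
    GAMMA_L, GAMMA_S = 0.99, 0.7

    def R(rl_stream, rs_stream):
        """R(t) re-evaluated at the current turn:
        sum over tau of Rl(t+tau)*gamma_l^tau + Rs(t+tau)*gamma_s^tau."""
        taus = np.arange(len(rl_stream))
        return float(np.sum(rl_stream * GAMMA_L**taus + rs_stream * GAMMA_S**taus))

    def streams(delay, option):
        """Reward streams (indexed from the current turn) for two one-off options:
        'wait'    -- an Rl payoff of 3, one turn after the temptation arrives
        'indulge' -- an Rs payoff of 5 when the temptation arrives, forgoing the Rl."""
        rl, rs = np.zeros(delay + 2), np.zeros(delay + 2)
        if option == "wait":
            rl[delay + 1] = 3.0
        else:
            rs[delay] = 5.0
        return rl, rs

    # Ten turns before the temptation, H prefers to wait ...
    print(R(*streams(10, "wait")), R(*streams(10, "indulge")))   # ~2.69 vs ~0.14
    # ... but re-evaluating R(t) at the moment of temptation reverses the choice.
    print(R(*streams(0, "wait")), R(*streams(0, "indulge")))     # ~2.97 vs 5.0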

So, we have a simple and not-completely-implausible model of an inconsistent human. The question is, how do we resolve it? None of the obvious approaches are ideal, but it’s worth looking at their features.

Freeze the reward

This is the most obvious approach: simply freeze the reward, so that the reward at time t′>t is the same as the reward at time t (though, in the absence of time-travel, the rewards between t and t′ are no longer relevant).

In practice, though, this becomes equivalent to simply forgetting about Rs entirely. After a few turns, the exponential shrinkage of the factor (γs/γl)^τ makes Rs’s typical contribution insignificant relative to Rl’s. So this approach involves destroying one of H’s sources of reward almost entirely.
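A quick sketch of how fast Rs fades under the frozen reward (discount values illustrative; only the ratio γs/γl matters):

    # Under the frozen reward, a reward arriving tau turns after the freeze keeps
    # the weights it had in R(t): gamma_l**tau for Rl and gamma_s**tau for Rs.
    GAMMA_L, GAMMA_S = 0.99, 0.7
    for tau in (0, 5, 10, 20):
        print(tau, round((GAMMA_S / GAMMA_L) ** tau, 4))
    # 0 -> 1.0, 5 -> ~0.18, 10 -> ~0.03, 20 -> ~0.001: Rs quickly stops mattering.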

Balance the rewards

Another approach would be to balance the rewards by setting γl=γs, either at the initial value of γl, the initial value of γs, or some other value.

This would make the reward consistent, but it has the opposite problem to the previous approach: the long-term importance of Rs is now massively magnified relative to Rl, so long-term plans will prioritise Rs over Rl much more than before.
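A back-of-the-envelope sketch of the magnification (same illustrative numbers): the total weight a consistent agent puts on a steady stream of Rs rewards jumps from 1/(1−γs) to 1/(1−γl) once the discount rates are balanced upwards.

    # Total discounted weight on a constant stream of Rs rewards,
    # before and after setting gamma_s = gamma_l. Illustrative values only.
    GAMMA_L, GAMMA_S = 0.99, 0.7
    print(1 / (1 - GAMMA_S))   # ~3.3 with the original gamma_s
    print(1 / (1 - GAMMA_L))   # ~100 after balancing -- a ~30x magnification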

Narrative/frequentist/feature based approach

Since we’re not supposed to do this, let’s anthropomorphise H. We can imagine that H has some sort of narrative about their existence – they see themselves as being a certain type of person (possibly mainly connected with Rl), who has some quirks/indulgences/sins (possibly mainly connected with Rs).

If they want to extirpate Rs entirely (“sin”), this is the same as the “Freeze the reward” approach. But they may instead prefer to live their life with roughly the same proportion of Rs as before (“quirk”), or slightly less (“indulgence”).

In that case, the new R would be chosen for consequentialist reasons: not by looking at the individual terms Rl, Rs, γl, and γs, but by looking at the consequences of following R(t) under “typical” circumstances, and designing the new reward to replicate this behaviour (and this distribution of reward features), while allowing more efficiency. This is, in itself, an interesting IRL problem. But it seems to make sense for humans, as we define ourselves a lot by what we do and experience, rather than by the pleasures and choices that lead up to those experiences.
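Here is a minimal sketch of the first half of that procedure: simulating H’s greedy R(t) behaviour in a toy “typical” environment and recording the distribution of reward features it actually experiences (how much Rl and Rs it collects, and how often it indulges). The environment, constants, and names are illustrative assumptions, not from the model above; a new consistent reward would then be chosen so that optimising it roughly reproduces these statistics.

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative constants: a baseline Rl reward per turn, and a 'hangover' that
    # knocks COST off Rl for DELAY turns after each indulgence. Note gamma_s only
    # enters the turn-by-turn comparison as gamma_s**0 = 1 on the immediate Rs term.
    GAMMA_L = 0.99
    BASE_RL, COST, DELAY, P_TEMPT, TURNS = 1.0, 0.1, 5, 0.5, 500

    def simulate():
        """One trajectory of the greedy inconsistent agent: each turn it indulges
        iff the immediate Rs gain outweighs the gamma_l-discounted future Rl loss."""
        future_rl_loss = COST * sum(GAMMA_L**tau for tau in range(1, DELAY + 1))
        hangover = np.zeros(TURNS + DELAY + 1)
        total_rl = total_rs = indulgences = 0.0
        for t in range(TURNS):
            rl_now = BASE_RL - hangover[t]
            rs_now = 0.0
            if rng.random() < P_TEMPT:
                s = rng.uniform(0.0, 1.0)        # size of this turn's temptation
                if s > future_rl_loss:           # the greedy R(t) comparison
                    rs_now = s
                    hangover[t + 1 : t + 1 + DELAY] += COST
                    indulgences += 1
            total_rl += rl_now
            total_rs += rs_now
        return total_rl, total_rs, indulgences / TURNS

    # The empirical "reward feature" distribution under typical circumstances --
    # the target the new consistent reward should roughly reproduce.
    stats = np.array([simulate() for _ in range(50)])
    print("mean Rl collected per trajectory:", stats[:, 0].mean())
    print("mean Rs collected per trajectory:", stats[:, 1].mean())
    print("mean fraction of turns spent indulging:", stats[:, 2].mean())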

What do people feel about these approaches? If you had this kind of inconsistency in your reward, how would you typically want it to be resolved?

2 comments


comment by magfrump · 2017-10-04T17:23:39.348Z · LW(p) · GW(p)

I'd like to try running some simulations with a model like this, but I'm realizing that the environment needs to be more complicated than just generating a bunch of pairs of random numbers. But it would be interesting to see how different random circumstances cause a model like this to behave differently.

comment by the gears to ascension (lahwran) · 2017-10-04T16:18:28.674Z · LW(p) · GW(p)

I resonate with that description of IRL-at-oneself. I have very much been trying to follow the reasoning process that long term planning needs to inline short-term values, and it's been slowly increasing in how much it pays off. I also like this model of human discounting irrationality more than the smooth hyperbolic discounting one - it feels to me like, reasoning only from architectural priors, we ought to expect more exponential discounting between heavily log-quantized time units, so that rewards that are within the ten second span all evaluate the same, the 10 minute span all evaluate the same, the 100 minute span all evaluate the same, etc.