Beyond algorithmic equivalence: self-modelling

post by Stuart_Armstrong · 2018-02-28T16:55:55.161Z · LW · GW · 3 comments

Contents

  Self-modelling
  Self-model and preparation 
  The philosophical position
3 comments

In the previous post, I discussed ways that the internal structure of an algorithm might, given the right normative assumption, allow us to distinguish bias from reward.

Here I'll be pushing the modelling a bit further.

Self-modelling

Consider the same toy anchoring-bias problem as the previous post, with the human algorithm H, some object x, a random integer z (the anchor), and an anchoring bias given by

H(x, z) = v(x) + k·z,

for some valuation function v that is independent of z, where k > 0 measures the strength of the anchoring effect.

On these inputs, the internal structure of H is: compute the valuation v(x), then add the anchoring term k·z and output the total.
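To make the toy model concrete, here is a minimal Python sketch of the biased algorithm under the linear anchoring form above; the particular valuation function v and the coefficient K are illustrative stand-ins, not anything from the post:

```python
K = 0.1  # assumed anchoring coefficient k (illustrative value, not from the post)

def v(x):
    """Stand-in valuation function; independent of the anchor z."""
    return float(len(str(x)))  # any fixed function of x will do for the toy model

def H(x, z):
    """H's actual output on these inputs: the valuation plus the anchoring term."""
    return v(x) + K * z
```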

However, H is capable of self-modelling, to allow it to make long-term decisions. At time 0, H models itself at time 1 as:

H(x, z) = v(x).

Note that H is in error here: it doesn't take into account the influence of z on its own behaviour.
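Continuing the sketch above, the self-model simply drops the anchoring term, so its predictions diverge from H's actual behaviour whenever the anchor is non-zero:

```python
def H_self_model(x, z):
    """H's model of its own behaviour at time 1: the influence of z is ignored."""
    return v(x)

# The self-model is in error whenever the anchor is non-zero:
x, z = "some object", 50
print(H(x, z), H_self_model(x, z))  # outputs differ by K * z = 5.0
```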

In this situation, it could be justifiable to say that H's self-model is the correct model of its own values. And, in that case, the anchoring effect can safely be dismissed as a bias.

Self-model and preparation

Let's make the previous setup a bit more complicated, and consider that, sometimes, the agent is aware of the effect of z, and sometimes they aren't.

At time 0, they also have an extra action choice: either b, which will block their future self from seeing z, or ∅, which lets everything proceed as normal. Suppose further that whenever H is aware of the effect of z, they take action b.

And whenever H isn't aware of the effect of z, they take no action, i.e. they take ∅.

Then it seems very justifiable to see H as opposing the anchoring effect in themselves, and thus to classify it as a bias rather than as a value/preference/reward.
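As a minimal sketch of this second setup, continuing the illustrative v and K from above (the action labels "block" and "noop" stand in for b and ∅):

```python
def prepare(aware_of_anchoring: bool) -> str:
    """Time-0 choice: an H aware of the effect of z blocks it; otherwise, no action."""
    return "block" if aware_of_anchoring else "noop"

def H_at_time_1(x, z, time_0_action):
    """At time 1, the anchor only influences H if it was not blocked."""
    if time_0_action == "block":
        return v(x)        # the future self never sees z
    return v(x) + K * z    # the anchoring bias operates as before
```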

The philosophical position

The examples in this post seem stronger than those in the previous one at justifying the claim that "the anchoring bias is actually a bias".

More importantly, there is a philosophical justification, not just an ad hoc one. We are assuming that H has a self-model of their own values: they have a model of what is a value and what is a bias in their own behaviour.

Then we can define the reward of H as the reward that H models itself as having.
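In the toy sketch above, this definition amounts to reading the reward off the self-model rather than off the full algorithm, so anything the self-model omits (here, the anchoring term) gets treated as bias:

```python
def inferred_reward(x, z):
    """The reward we ascribe to H: whatever H models itself as valuing."""
    return H_self_model(x, z)  # here v(x); the anchoring term is dismissed as bias

print(inferred_reward("some object", 50))  # 11.0, independent of the anchor z
```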

In subsequent posts, I'll explore whether this definition is justified, how to access these self-models, and what can be done about errors and contradictions in self-models.

3 comments

Comments sorted by top scores.

comment by RyanCarey · 2018-03-03T10:35:00.614Z · LW(p) · GW(p)

I agree that the agent should be able to make a decent effort at telling us which of its drives are biases (/addictions) versus values. One complicating factor is that agents change their opinions about these matters over time. Imagine a philosopher who uses the drug heroin. They may very well vacillate on whether heroin satisfies their full-preferences, even if the experience of taking heroin is not changing. This could happen via introspection, via philosophical investigation, via examining fMRI scans, et cetera. It's tricky for the human to state their biases with confidence because they may never know when they are done updating on the matter.

Intuitively, an agent might want the AI system to do this examination and then to maximize whatever turns out to be valuable. That is, you might want the bias-model to be the one that you would settle on if you thought for a long time, similarly to enlightened self-interest / extrapolated volition models. Similar problems ensue: e.g., this process may diverge. Or it may be fundamentally indeterminate whether some drives are values or biases.

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2018-03-05T12:43:58.051Z · LW(p) · GW(p)

>One complicating factor is that agents change their opinions about these matters over time.

Yep! This is one of the major issues, and one that I'll try to model in a soon-to-be-coming post. The whole issue of rigged and influenceable learning processes is connected with trying to learn the preferences of such an agent.

>Or it may be fundamentally indeterminate whether some drives are values or biases.

I think it's fundamentally indeterminate in principle, but we can make some good judgements in practice.

comment by Gordon Seidoh Worley (gworley) · 2018-02-28T19:42:12.630Z · LW(p) · GW(p)

Ooooh, I like where this is going. I realize you still have more to develop on this idea, but is your thought that this could replace the use of objective reward functions that exist outside the agent?