Agents that don't become maximisers

post by Stuart_Armstrong · 2017-04-07T12:56:55.747Z · LW · GW · Legacy · 15 comments

Contents

  Intransitive example
  Stability and information
  Why are satisficers different?
  Is this design plausible?

Cross-posted at the Intelligent Agent forum.

According to the basic AI drives thesis, (almost) any agent capable of self-modification will self-modify into an expected utility maximiser.

The typical examples are inconsistent utility maximisers, satisficers, and unexploitable agents, and it's easy to think that all agents fall roughly into these broad categories. There's also the observation that, when looking at full policies rather than individual actions, many biased agents become expected utility maximisers (unless they want to lose pointlessly).

Nevertheless... there is an entire category of agents that generically seem to not self-modify into maximisers. These are agents that attempt to maximise f(E(U)) where U is some utility function, E(U) is its expectation, and f is a function that is neither wholly increasing nor decreasing.

Intransitive example

Let there be a utility function U with three actions a0, a5, and a10 that set U to 0, 5, and 10, respectively.

The function f is 1 in the range (4,6) and is 0 elsewhere. Hence the agent needs to set the expectation of U to be in that range.

What will happen is that one of the three actions will be randomly removed from the set, and the agent will then have to choose between the remaining two. What possible policies can the agent take?

Well, there are three option sets the agent could face - (a0, a5), (a5, a10), and (a10, a0) - each with two options, and hence 2³ = 8 pure policies. Two of those policies - always choosing the first option in those ordered pairs, or always choosing the second - are intransitive, since their pairwise choices form a cycle: no option is ranked above both of the others.

But those intransitive policies have an expected utility of (0+5+10)/3 = 5, which is just what the agent wants.

Even worse, none of the other (transitive) policies are acceptable. You can see this because each of the six transitive policies can be reached by taking one of the intransitive policies and flipping a choice, which must change the expected utility by ±5/3 or ±10/3, moving it out of the (4,6) range.

Thus there is no expected utility maximisation that corresponds to this behaviour, as such maximisations are always transitive.

Or another way of seeing this: the policy of picking one of the two available actions at random has an expectation of (0+0+5+5+10+10)/6 = 5, so it is also acceptable. But for an expected utility maximiser, if the random policy is optimal, then so is every other policy, which is not the case here.
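To make the counting concrete, here is a minimal Python sketch (an illustration written for this example, not code from the post) that enumerates all eight pure policies, flags which are transitive, and checks which ones the f(E(U)) agent accepts, along with the uniform-random policy:

```python
import itertools

# Utilities set by the three actions a0, a5, a10.
U = {"a0": 0, "a5": 5, "a10": 10}

# The three equally likely option sets left after one action is removed.
option_sets = [("a0", "a5"), ("a5", "a10"), ("a10", "a0")]

def f(expected_u):
    """The agent's objective: 1 if E(U) lies strictly inside (4, 6), else 0."""
    return 1 if 4 < expected_u < 6 else 0

def is_transitive(choices):
    """On three options, pairwise choices are transitive iff some action wins
    both of the pairs it appears in; otherwise they form a cycle."""
    return any(list(choices).count(a) == 2 for a in U)

# All 2^3 = 8 pure policies: one choice from each option set.
for choices in itertools.product(*option_sets):
    eu = sum(U[a] for a in choices) / 3          # each option set has probability 1/3
    print(choices,
          "transitive" if is_transitive(choices) else "intransitive",
          f"E(U) = {eu:.2f}",
          "acceptable" if f(eu) else "rejected")

# The uniform-random policy: pick either available action with probability 1/2.
random_eu = sum(U[a] for pair in option_sets for a in pair) / 6
print(f"uniform random: E(U) = {random_eu:.1f},",
      "acceptable" if f(random_eu) else "rejected")
```

Only the two cyclic policies and the uniform-random policy come out acceptable; every transitive pure policy lands outside (4,6), just as argued above.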

 

Stability and information

The agent defined above is actually stable under self-modification: it can simply wait till it knows which action is going to be removed, then pick a5 in the two cases where this is possible, and choose randomly between a0 and a10 if that pair comes up. And that's what it would do if it faced any of those three choices from the start.

But that's an artefact of the options in the setup. If instead the actions had been a0, a4, and a11, then all the previous results would remain valid, but the agent would want to self-modify (if only to deal with the (a0, a4) option set).
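A small sketch of that contrast (again just an illustration, with the target range (4, 6) hard-coded): it checks whether, for each pair, some choice made after learning which pair came up can keep the conditional expectation in range, and compares that with precommitting to a cyclic policy.

```python
def pair_solvable(pair, lo=4, hi=6):
    """True iff some (possibly mixed) choice within this pair puts the conditional
    expectation strictly inside (lo, hi).  Mixtures span exactly [min(pair), max(pair)],
    which meets the open interval iff min < hi and max > lo."""
    return min(pair) < hi and max(pair) > lo

for name, utils in [("original (0, 5, 10)", (0, 5, 10)),
                    ("modified (0, 4, 11)", (0, 4, 11))]:
    u1, u2, u3 = utils
    pairs = [(u1, u2), (u2, u3), (u3, u1)]
    print(name)
    print("  solvable after learning the pair:", [pair_solvable(p) for p in pairs])
    print("  ex-ante E(U) of a precommitted cyclic policy:", sum(utils) / 3)
```

In the modified setup the (a0, a4) pair can only ever reach an expectation of 4, so once that pair actually comes up the agent is stuck; committing in advance to a cyclic policy keeps the overall expectation at 5.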

What about information? Is it always good for the agent to know more?

Well, if the agent can self-modify before receiving extra information, then extra information can never be negative (trivial proof: the agent can self-modify to ignore the information if knowing it would be harmful).

But if the agent cannot self-modify before receiving the information, then it can sometimes pay not to learn, or to forget, some things. For instance, suppose some extra piece of information would tell the agent the utilities of the various actions; the agent might want to erase that information simply so that its successor would be tempted to choose randomly.

 

Why are satisficers different?

Note that this framework does not include satisficers, which can be seen as having a threshold c such that g(u)=0 for u < c and g(u)=1 for u ≥ c, and maximising g(E(U)).

But this g is an increasing (step) function, and that makes all the difference. An expected utility maximiser choosing between policies p and q will pick p if E(U|p) > E(U|q). If g is increasing, then g(E(U|p)) ≥ g(E(U|q)), so such a choice is also permissible for a satisficer. The change from > to ≥ is why satisficers can become maximisers (maximising is compatible with satisficing) but not the opposite.
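A toy comparison (my own illustration; the satisficer threshold c = 4 is arbitrary) of the increasing step g against the non-monotone f from the earlier example, evaluated on policies that guarantee utility 0, 5 or 10:

```python
def g(expected_u, c=4):
    """Satisficer objective: an increasing step function (threshold c is illustrative)."""
    return 1 if expected_u >= c else 0

def f(expected_u):
    """Range agent objective: 1 only if E(U) lies strictly inside (4, 6)."""
    return 1 if 4 < expected_u < 6 else 0

for eu in (0, 5, 10):
    print(f"E(U) = {eu:>2}:  satisficer g = {g(eu)},  range agent f = {f(eu)}")
```

The maximising policy (guaranteed utility 10) is acceptable to the satisficer, so nothing stops a satisficer from becoming a maximiser; the range agent rejects it outright.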

 

Is this design plausible?

It might seem bizarre to have an agent that restricts expected utility to a particular range, but it's actually quite sensible, at least intuitively.

The problem with maximisers is that the extreme optimised policy is likely to include dangerous side-effects we didn't expect. Satisficers were supposed to solve this, by allowing the agent to not focus only on the extreme optimised policy, but their failure mode is that they don't *preclude* following such a policy. Hence this design might be felt to be superior, as it also rules out the extreme optimised policies.

Its failure mode, though, is that it doesn't preclude, for instance, a probabilistic mix of an extreme optimised policy with a random inefficient one.
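A two-line illustration of that failure mode (numbers arbitrary): a coin-flip between a worrying "extreme" policy worth 10 and a useless one worth 0 lands the expectation squarely in the (4, 6) range.

```python
p_extreme = 0.5                               # probability of running the extreme policy
expected_u = p_extreme * 10 + (1 - p_extreme) * 0
print(expected_u, 4 < expected_u < 6)         # 5.0 True -- yet the extreme policy still runs half the time
```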

15 comments


comment by whpearson · 2017-04-07T17:09:13.951Z · LW(p) · GW(p)

You could make f take a tuple of the expectation and the variance if you wanted to, and provide ranges for both of them.

FWIW, I don't find these discussions likely to be useful. I feel they are sweeping what U is (how it is built, and thus how it changes; is it maximised?) under the carpet, and missing some things because of it.

To explain what I mean: if U refers in some way to a human's existence/happiness, you have to be able to use primary sense data (cameras etc.) to distinguish:

  • humans from non-human dolls/animatronics
  • dead humans from one that is in a coma (what does it mean to be alive?)
  • a free, happy human from one that is in prison trying to put on a brave face (what is eudaimonia?)

Encoding solutions to a bunch of philosophical questions just doesn't seem like a realistic way to do things!

We do not start with a superintelligence; we start with nothing and have to build an AGI. We cannot rely on a pre-human-level AI being smart enough to self-improve without mucking up U (I'm not sure we can rely on a superhuman one either). So we have to do it. The more content we have to put into an unchanging U, the harder it will be to make it (and make it bug-free).

It seems to preclude all the kinds of learning that humans do in this field. We do things differently, so there is obviously something being missed.

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2017-04-10T06:12:59.512Z · LW(p) · GW(p)

This is intended to be a counterexample to some naive assumptions about self-modifying systems.

comment by tukabel · 2017-04-11T07:11:38.015Z · LW(p) · GW(p)

The problem is that the already existing parasites (plants, animals, Wall Street, socialism, politics, the state, you name it) usually have absolutely minimal self-control mechanisms (or plain zero) and maximize their utility functions till the catastrophic end (death of the host organism/society).

Because... it's so simple, it's so "first choice". Viruses don't even have to be technically "alive". No surprise that we obviously started with computer viruses as the first self-replicators on the new platform.

So we can expect zillions of fast-replicating "dumb AGI" (dAGI) agents maximising all sorts of crazy things before we get anywhere near the "intelligent AGI" (iAGI). And these dumb parasitic AGIs can be much more dangerous than that mythical singularitarian superhuman iAGI, which may never even come if this dAGI swarm manages to destroy everything, or attacks the iAGI directly.

In general, these "AI containment", "aligning" or "friendly" idealistic approaches look dangerously naive if they are the only "weapon" we are supposed to have... maybe they should be complemented with the good old military option (fight it... and it will come when government/military forces jump into the field). Just in case... to be prepared if things go wrong (sure, there's the esoteric argument that you cannot fight a very advanced AGI, but at least this "very" limit deserves further study).

Replies from: Lumifer
comment by Lumifer · 2017-04-11T14:25:46.938Z · LW(p) · GW(p)

The problem is that the already existing parasites (plants, animals, Wall Street, socialism, politics, the state, you name it) usually have absolutely minimal self-control mechanisms (or plain zero) and maximize their utility functions till the catastrophic end (death of the host organism/society).

This is false.

comment by Yosarian2 · 2017-04-10T13:45:05.334Z · LW(p) · GW(p)

Its failure mode, though, is that it doesn't preclude, for instance, a probabilistic mix of an extreme optimised policy with a random inefficient one.

I think there is a more serious failure mode here.

If an AI wants to keep a utility function within a certain range, what's to stop it from dramatically increasing its own intelligence, access to resources, etc. towards infinity, just to increase the probability of staying within that range in the future from 99.9% up to 99.999%? You still might run into the same "instrumental goals" problem.

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2017-04-11T08:53:41.408Z · LW(p) · GW(p)

I'd call that a mix of extreme optimised policy with inefficiency (not in the exact technical sense, but informally).

There's nothing to stop the agent from doing that, but it's also not required. This is expected utility we're talking about, so "expected utility in the range 0.8-1" is achieved - with certainty - by a policy that has a 90% probability of achieving 1 utility (and a 10% probability of achieving 0). You may say there's also a tiny chance of the AI's estimates being wrong, its sensors, its probability calculation... but all that would just be absorbed into, say, an 89% chance of success.

In a sense, this was the hope for the satisficer - that it would make a half-assed effort. But it can choose to follow an optimal maximising policy instead. This type of agent can also choose a maximising-style policy, but mix it with deliberate inefficiency; i.e. it isn't really any better.

Replies from: Yosarian2
comment by Yosarian2 · 2017-04-11T15:06:09.985Z · LW(p) · GW(p)

Ah, interesting, I understand better now what you're saying. That makes more sense, thank you.

Here's another possible failure mode, then: if the AI's goal is just to manipulate its own expected utility, and it calculates expected utility using some Bayesian method of updating priors with new information, could it selectively seek out new information to convince itself that what it was already going to do will have an expected utility in the range of .8, and game the system that way? I know that sounds strange, but humans do stuff like that all the time.

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2017-04-12T07:22:44.400Z · LW(p) · GW(p)

It can't bias its information search (looking for evidence for X rather than evidence against it), but it can play on the variance.

Suppose you want to have a belief in X in the 0.4 to 0.6 range, and there's a video tape that would clear the matter up completely. Then not watching the video is a good move! If you currently have a belief of 0.3, then you can't bias your video watching, but you could get an idiot to watch the video and recount it vaguely to you; then you might end up with a higher chance (say 20%) of being in the 0.4 to 0.6 range.
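A sketch of this example with one concrete noise model (the 70% reliability of the recounting is my own choice, so the final probability comes out around 42% rather than the illustrative 20%):

```python
prior, lo, hi = 0.3, 0.4, 0.6      # current belief in X, and the target range

# Watching the tape settles the question: the posterior is 1 (with probability
# `prior`) or 0, so it never lands inside (0.4, 0.6).
p_in_range_tape = prior * (lo < 1 < hi) + (1 - prior) * (lo < 0 < hi)

# A vague recounting is a noisy channel that reports the truth with probability r.
r = 0.7
p_yes = prior * r + (1 - prior) * (1 - r)      # P(recounting says "X")
post_yes = prior * r / p_yes                   # posterior after a "yes"
post_no = prior * (1 - r) / (1 - p_yes)        # posterior after a "no"
p_in_range_noisy = p_yes * (lo < post_yes < hi) + (1 - p_yes) * (lo < post_no < hi)

print("P(belief ends in range) after watching the tape:", p_in_range_tape)             # 0.0
print("P(belief ends in range) after vague recounting: ", round(p_in_range_noisy, 2))  # 0.42
print("expected posterior either way:", round(p_yes * post_yes + (1 - p_yes) * post_no, 2))  # 0.3
```

The expected posterior is 0.3 either way (conservation of expected evidence), but only the noisy channel gives a real chance of ending up in the target range.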

Replies from: entirelyuseless, Yosarian2
comment by entirelyuseless · 2017-04-12T13:36:05.493Z · LW(p) · GW(p)

You can't look for evidence for X rather than evidence against it, in the sense of conservation of expected evidence. But this just means that the size of the move multiplied by its probability will be equal in the two directions. It does not mean that the probability of finding evidence in favor is 50% and the probability of finding evidence against is 50%. So in that sense, you can indeed look for evidence in favor, by looking in places that have a very high probability of evidence in favor and a low probability of evidence against; it is just that if you unluckily happen to find evidence against, it will be extremely strong evidence against.
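A quick numerical version of that point (the numbers are chosen only for illustration): a search that almost always yields weak confirmation must, by conservation of expected evidence, occasionally yield strong disconfirmation, and the two probability-weighted moves cancel.

```python
prior = 0.3
p_for, post_for = 0.9, 0.33        # 90% chance of weak evidence in favour
# Conservation of expected evidence pins down the unlucky branch:
post_against = (prior - p_for * post_for) / (1 - p_for)

print("posterior on the rare 'against' branch:", round(post_against, 2))              # 0.03
print("upward move   * probability:", round((post_for - prior) * p_for, 3))           # 0.027
print("downward move * probability:", round((prior - post_against) * (1 - p_for), 3)) # 0.027
```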

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2017-04-13T10:32:41.437Z · LW(p) · GW(p)

Yes. That's what I informally call playing the variance. You want to look wherever there is the highest probability of a swing into the range that you want.

comment by Yosarian2 · 2017-04-12T14:24:11.790Z · LW(p) · GW(p)

If it's capable of self-modifying, then it could do weirder things.

For example, let's say the AI knows that news source X will almost always push stories in favor of action Y. (Fox News will almost always push information that supports the argument that we should bomb the Middle East, The Guardian will almost always push information that supports us becoming more socialist, whatever.) If the AI wants to bias itself in favor of thinking that action Y will create more utility, what if it self-modifies to first convince itself that news source X is a much more reliable source of information than it actually is, and to weigh that information more heavily in its future analysis?

If it can't self-modify directly, it could maybe do tricky things involving only observing the desired information source at key moments, with the goal of increasing its own confidence in that information source; then, once it has modified its own confidence sufficiently, it looks at that information source to find the information it is looking for.

(Again, this sounds crazy, but keep in mind humans do this stuff to themselves all the time.)

Etc. Basically, what this all boils down to is that the AI doesn't really care about what happens in the real world; it's not trying to actually accomplish a goal. Instead, its primary objective is to make itself think that it has an 80% chance of accomplishing the goal (or whatever), and once it does that, it doesn't really matter whether the goal happens or not. It has a built-in motivation to try to trick itself.

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2017-04-13T10:37:06.422Z · LW(p) · GW(p)

If it's capable of self-modifying, then it could do weirder things.

Yes, but much weirder than you're imagining :-) This agent design is highly unstable, and will usually self-modify into something else entirely, very fast (see the top example where it self-modifies into a non-transitive agent).

If the AI wants to bias itself in favor of [...], what if it self-modifies to first convince itself that news source [...]

What is the expected utility from that bias action (given the expected behaviour after)? The AI has to make that bias decision while not being biased. So this doesn't get round the conservation of expected evidence.

comment by simon · 2017-04-07T16:53:25.727Z · LW(p) · GW(p)

I would think a satisficer would maximize E(g(U)) not g(E(U)).

I assume you are avoiding maximizing E(f(U)) because doing so would result in the AI seeking super-high certainty that U is at the maximum of f, leading to side effects?

Edit: it seems to me that once the AI got the expected value it wanted, it would be incentivized not to seek new information, since that would adjust the expected value away from the target. So e.g. it might arrange things so that the expected value conditional on it committing suicide is at the intended level, then commit suicide. Or maybe that's a feature, not a bug, if we want self-limiting AI?

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2017-04-10T06:14:05.917Z · LW(p) · GW(p)

Maximising E(f(U)) is just expected utility maximisation on the utility function f(U).

Replies from: simon
comment by simon · 2017-04-11T15:33:05.950Z · LW(p) · GW(p)

Yes, of course. But generally you would expect a state with maximal f(U) to be different from a state with maximal U - the max-U state is particularly likely to be a "marketing world", but a max-f(U) state is not, since any world with U at the maximum of f qualifies.