Dangerous optimisation includes variance minimisation

post by Stuart_Armstrong · 2021-06-08T11:34:04.621Z · LW · GW · 5 comments

Contents

  Variance control
  Conclusion

Let's look again at Stuart Russell's quote:

A system that is optimizing a function of n variables, where the objective depends on a subset of size k<n, will often set the remaining unconstrained variables to extreme values; if one of those unconstrained variables is actually something we care about, the solution found may be highly undesirable.

It is not immediately obvious that this is true. If we are maximising (or minimising) some of these variables, it's likely true, since maximisation or minimisation pushes them to unusual values that we wouldn't encounter "naturally". But things might be different if the variables only need to be set to certain "plausible" values.

Suppose, for example, that an AI is building widgets, and that it is motivated to increase widget production $W$. It can choose the following policies, with the following consequences (where $w_1 < w_2 < \cdots < w_5$, each successive policy producing far more widgets in expectation than the last):

  1. $\pi_1$: build widgets the conventional way; $\mathbb{E}(W \mid \pi_1) = w_1$.
  2. $\pi_2$: build widgets efficiently; $\mathbb{E}(W \mid \pi_2) = w_2$.
  3. $\pi_3$: introduce new innovative ways of building widgets; $\mathbb{E}(W \mid \pi_3) = w_3$.
  4. $\pi_4$: dominate the world's widget industry; $\mathbb{E}(W \mid \pi_4) = w_4$.
  5. $\pi_5$: take over the world, optimise the universe for widget production; $\mathbb{E}(W \mid \pi_5) = w_5$.

If the AI's goal is to maximise $W$ without limit, then the fifth option becomes attractive to it. Even if it just wants to set $W$ to a limited but high value, it benefits from more control of the world. In short: more control of the world means more expected widgets.

But what if the AI was designed to set $W$ to some specific target value $w^*$, or to keep it in a narrow range around $w^*$? Then it would seem that it has a lower incentive for control, and might just "do its job", the way we'd like it to; the other variables would not be set to extreme values, since there is no need for the AI to change things much.
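As a minimal sketch of both points (the policy names, output numbers, and target value below are invented for illustration, not taken from the post): an unbounded maximiser picks the takeover policy, while in a world with no randomness a utility that peaks at the target is maximised by whichever modest policy simply hits it.

```python
# Hypothetical illustration: policies, outputs, and the target are invented numbers.
# Unbounded maximisation of W favours the most extreme policy, whereas a
# target-based utility U = -(W - w*)^2 in a *deterministic* world is maximised
# by the modest policy that hits the target; the extreme policies gain nothing.

policies = {
    "pi_1: conventional": 100,
    "pi_2: efficient": 1_000,
    "pi_3: innovative": 10_000,
    "pi_4: dominate the industry": 1_000_000,
    "pi_5: take over the world": 10**20,
}

target = 1_000  # hypothetical target value w*

def utility(widgets: int, target: int) -> float:
    return -float((widgets - target) ** 2)

best_if_maximising = max(policies, key=policies.get)
best_if_targeting = max(policies, key=lambda p: utility(policies[p], target))
print(best_if_maximising)  # -> "pi_5: take over the world"
print(best_if_targeting)   # -> "pi_2: efficient": the modest policy wins when there is no noise
```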

Eliezer and Nick and others have made the point that this is still not safe, in posts that I can't currently find. They use examples like the AI taking over the world and building cameras to be sure that it has constructed exactly the right number of widgets. To some, these scenarios seem extreme as intuition pumps, so I thought it would be simpler to rephrase the problem as: moving the variance to unusual values.

Variance control

Suppose that the AI was designed to keep $W$ at some target value $w^*$. We could give it the utility function $U = -(W - w^*)^2$, for instance. Would it then stick to the modest policy whose expected output hits that target?

Now assume further that the world is not totally static. Random events happen, increasing or decreasing the production of widgets. If the AI follows policy $\pi$, then its expected reward is:

$$\mathbb{E}(U \mid \pi) = -\mathrm{Var}(W \mid \pi) - \big(\mathbb{E}(W \mid \pi) - w^*\big)^2.$$

The second term, $-\big(\mathbb{E}(W \mid \pi) - w^*\big)^2$, the AI can control by "doing its job" and picking a human-safe policy whose expected output hits the target. But it also wants to control the first term, the variance of $W$; specifically, it wants to lower it. Even more specifically, it wants to move that variance to a very low, highly unusual value.[1]
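A small simulation of that effect (the means, noise levels, and target below are invented for illustration): two policies both hit the target in expectation, but the second has suppressed the random shocks to widget production, so the quadratic utility strictly prefers it.

```python
# Hypothetical illustration: both policies hit the target w* in expectation,
# but the second has crushed the variance of W (e.g. by controlling everything
# that could perturb widget production). Since
#   E[-(W - w*)^2] = -Var(W) - (E[W] - w*)^2,
# the lower-variance policy gets strictly higher expected utility.
import random

random.seed(0)
target = 1_000

def expected_utility(mean: float, noise_sd: float, samples: int = 100_000) -> float:
    total = 0.0
    for _ in range(samples):
        w = random.gauss(mean, noise_sd)
        total += -((w - target) ** 2)
    return total / samples

print(expected_utility(mean=1_000, noise_sd=50.0))  # "do the job" in a noisy world: about -2500
print(expected_utility(mean=1_000, noise_sd=1.0))   # variance suppressed: about -1
```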

So the previous problem appears again: it wants to move a variable - the variance of $W$ - to a very unusual value. In the real world, this could translate to it building excess capacity, taking control of its supply chains, removing any humans that might get in the way, etc... Since "humans that might get in the way" would end up being most humans - few nations would tolerate a powerful AI limiting their power and potential - this tends towards the classic "take control of the world" scenario.

Conclusion

So, minimising or maximising a variable, or setting it to an unusual value, is dangerous, as it incentivises the AI to take control of the world to achieve those unusual values. But setting a variable to a usual value can also be dangerous, in an uncertain world, as it incentivises the AI to take control of the world to set the variability of that variable to unusually low levels.

Thanks to Rebecca Gorman for the conversation which helped me clarify these thoughts.


  1. This is not a specific feature of using a square in $U$. To incentivise the AI to set $W = w^*$, we need a function of $W$ that peaks at $w^*$. This makes it concave-ish around $w^*$, which is what penalises spread, uncertainty and variance. ↩︎
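To spell out the footnote's point with a standard second-order approximation (this expansion is my addition, not in the original): if the utility $u$ is smooth and peaks at $w^*$, and the policy already hits the target on average, so that $\mathbb{E}(W) = w^*$, then

$$\mathbb{E}\big(u(W)\big) \approx u(w^*) + \tfrac{1}{2}\, u''(w^*)\, \mathrm{Var}(W), \qquad u''(w^*) \le 0,$$

so every unit of variance directly costs expected utility, whatever the exact shape of the peak; the square is just the case where this approximation is exact.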

5 comments


comment by Steven Byrnes (steve2152) · 2021-06-08T12:19:57.212Z · LW(p) · GW(p)

I agree! I'm 95% sure this is in Superintelligence somewhere, but nice to have a more-easily-linkable version.

comment by Dagon · 2021-06-08T15:49:32.256Z · LW(p) · GW(p)

Why do we believe that we have variables that nobody cares about?  Shouldn't the objective include all variables, even if some are fairly low-weighted in common ranges?

Replies from: BossSleepy
comment by Randomized, Controlled (BossSleepy) · 2021-06-09T14:46:02.296Z · LW(p) · GW(p)

Why do we believe that we have variables that nobody cares about?

Nobody believes this; however, we don't have a way to express all the things we care about in math or code yet.

comment by Lionel Levine · 2021-07-16T16:57:11.740Z · LW(p) · GW(p)

This is an obvious point, but: Any goal is likely to include some variance minimization as a subgoal, if only because of the possibility that another entity (rival AI, nation state, company) with different goals could take over the world.  If an AI has the means to take over the world, then it probably takes seriously the scenario that a rival takes over the world. Could it prevent that scenario without taking over itself?

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2021-07-16T18:27:19.372Z · LW(p) · GW(p)

This is a variant of my old question:

  • There is a button at your table. If you press it, it will give you absolute power. Do you press it?

Most people answer no. Followed by:

  • Hitler is sitting at the same table, and is looking at the button. Now do you press it?