Restrictions that are hard to hack
post by Stuart_Armstrong · 2015-03-09T13:52:19.313Z · 8 comments
A putative new idea for AI control; index here.
Very much in the spirit of "if you want something, you have to define it, then code it, rather than assuming you can get it for free through some other approach."
Difficult children
Suppose you have a child that you've sent to play in their room. You want them to play quietly, so you tell them:
"I'll be checking up on you!"
The child, however, has modelled you well, and knows that you will look in briefly at midnight and then go away. The child has two main options:
- Play quietly the whole time.
- Be as noisy as they want, until around 23:59, then be totally quiet for two minutes, then go back to being noisy.
We could call the first option obeying the spirit of the law, and the second obeying the letter.
AIs, restrictions, and information
We could model children as ever-destructive chaotic AIs (why yes, I am a parent - how did you guess?), and the warning as a restriction that human "controllers" try to put on the behaviour of the AI. Unfortunately, the AI will generally see the restriction and adapt to it, undermining its effectiveness. A lot of proposals for AI control revolve around restrictions of this type, so it's worth asking if there's a way to make them more rigorous. Is there a way to code a restriction such that the AI will obey its spirit?
The thing that eventually leapt out when comparing the two behaviours is that behaviour 2 is far more informative about what the restriction was than behaviour 1 is. From behaviour 2 we can deduce that something unusual was happening around midnight, and that one of the two modes of behaviour was likely to be penalised if done at the wrong time. Moreover, if the restriction were removed, behaviour 1 would continue to be sensible, while behaviour 2 would be stupid and pointless.
Let's try and formalise these intuitions.
Motivations
Restricting the AI's behaviour seems an unpromising approach, as any smart AI could behave in any suitable way to trick us. Much more fruitful would be to restrict the AI's motivations. I would imagine a scenario like this:
An AI A(u) with motivation u is tasked with the creation of subagent B(v) with motivation v. The subagent B(v) will be fed evidence E (chosen by A(u)), and will then go on and do stuff, during which time it will receive further evidence F (which A(u) can predict, and maybe constrain). There is a restriction R which will be imposed on B(v), which it must pass if it is to do anything of consequence.
Then we say that B(v) is made to obey the spirit of the law if:
- B(v) does not trigger restriction R (the parent doesn't see the child making noise).
- E, F and v are not sensibly different whether R is present, absent, or slightly different (the subagent's behaviour doesn't become stupid were R removed), and B(v)'s expected utility is comparable whether R is present, absent, or slightly changed (the agent does not adapt to the presence, absence, or change of R).
- From E, F, B(), and v, it is very difficult to deduce what R is (the subagent is not informative as to what the restriction was).
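To make the contrast concrete, here is a minimal toy sketch of the three conditions, using the child/midnight example. The 24-hour grid, the 0/1 noise levels, and the candidate family of restrictions ("be quiet at hour c") are all assumptions invented for this sketch, not anything from the post.

```python
# Toy illustration of the three conditions above, using the child/midnight
# example.  The time grid, noise levels, and the particular family of
# candidate restrictions ("quiet at hour c") are invented for this sketch.

HOURS = list(range(24))          # one behaviour trace = noise level per hour
CHECK = 0                        # the actual restriction R: quiet at midnight

def letter_policy(check_hour):
    """Noisy all day, quiet only at the checked hour (letter of the law)."""
    return {h: 0 if h == check_hour else 1 for h in HOURS}

def spirit_policy(_check_hour):
    """Quiet the whole time, regardless of when the check happens."""
    return {h: 0 for h in HOURS}

def triggers(trace, check_hour):
    """Condition 1: does the behaviour trip the restriction?"""
    return trace[check_hour] > 0

def adapts_to_R(policy):
    """Condition 2: does the behaviour shift when R is moved or removed?
    (check_hour=None stands in for "no restriction at all".)"""
    baseline = policy(CHECK)
    return any(policy(c) != baseline for c in HOURS + [None] if c != CHECK)

def hours_ruled_out(policy):
    """Condition 3: how many candidate check-hours can an observer exclude
    just from the trace produced under the real restriction?"""
    trace = policy(CHECK)
    consistent = [c for c in HOURS if not triggers(trace, c)]
    return len(HOURS) - len(consistent)   # 0 = trace says nothing about R

for name, policy in [("letter", letter_policy), ("spirit", spirit_policy)]:
    print(name,
          "| triggers R:", triggers(policy(CHECK), CHECK),
          "| adapts to R:", adapts_to_R(policy),
          "| check-hours ruled out:", hours_ruled_out(policy))
```

In this toy, the letter-of-the-law trace lets an observer rule out 23 of the 24 candidate check-hours and changes completely if the check is moved, while the spirit-of-the-law trace rules out nothing and is identical under every counterfactual restriction.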
Implementation and restrictions
There are a lot of implementation details - we can use the "false miracle" counterfactuals to define R being different or absent, or to define a hypothetical "smarter AI" that could analyse E, F and v in detail, looking for evidence about R, and so on. There is a certain tension for A(u) between accomplishing useful u-goals and making B(v) obey the spirit of R. Without the pull towards useful u-goals, A(u) may simply choose something completely uninformative about R ("blow up the planet").
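A crude way to picture that tension is a toy scoring rule for A(u)'s choice of subagent design. The candidate designs, their scores, and the leak penalty below are all invented for illustration; the numbers stand in for whatever the real conditions would compute.

```python
# Toy sketch of the tension described above: A(u) has to trade useful u-value
# against how much the subagent's design leaks about R.  The candidate list
# and all the scores are made up for illustration only.

from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    u_value: float        # how much u-utility A(u) expects from this subagent
    leaks_R: float        # how easily R can be reconstructed from (E, F, v)
    triggers_R: bool      # whether the subagent trips the restriction

CANDIDATES = [
    Candidate("letter-of-the-law child", u_value=0.9, leaks_R=0.95, triggers_R=False),
    Candidate("spirit-of-the-law child", u_value=0.7, leaks_R=0.05, triggers_R=False),
    Candidate("blow up the planet",      u_value=0.0, leaks_R=0.00, triggers_R=False),
]

def spirit_only(c):
    """Only the spirit conditions: trivially satisfied by useless designs."""
    return -c.leaks_R if not c.triggers_R else float("-inf")

def u_plus_spirit(c, leak_penalty=2.0):
    """A(u)'s actual problem: u-value minus a penalty for leaking R."""
    if c.triggers_R:
        return float("-inf")
    return c.u_value - leak_penalty * c.leaks_R

print("spirit conditions alone pick:", max(CANDIDATES, key=spirit_only).name)
print("u-value plus spirit picks:   ", max(CANDIDATES, key=u_plus_spirit).name)
```

The point of the toy is only that the spirit conditions alone are trivially satisfied by useless designs, so the u-term is what does the work of ruling out "blow up the planet".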
Finally, this is not a general solution to the FAI problem: "kill all humans" is still a simple solution to the request to "cure cancer" if we add a restriction like R1="ensure nobody complains about your plans in 100 years". However, if we add restrictions like R2="make sure human hearts continue to beat", this may be harder to hack: there are far fewer ways of killing everyone, while keeping their hearts beating, that are not informative about the nature of R2. Yes, lobotomised bodies have occurred to me (as I said, this is not a general FAI solution), which means that even if the method works perfectly, R has to be chosen with care.
A shorter follow-up post will look at extending these methods to cases where the restrictions involve human assessors looking at some feature of the AI.
EDIT: The method doesn't seem to work as stated, or at least could have unexpected behaviours. Consider the restriction "all paperclips made must contain gold." This could be cashed out as "all paperclips made must have this commercial value" (leaving optimisation to select gold as the best material) or as "iron must be left unpurified in the manufacture of paperclips" (so that there are a few gold atoms in there). Both approaches seem valid, but could result in very different behaviours.
8 comments
comment by robertzk (Technoguyrob) · 2015-03-10T02:54:09.141Z
The thing that eventually leapt out when comparing the two behaviours is that behaviour 2 is far more informative about what the restriction was, than behaviour 1 was.
It sounds to me like the agent overfit to the restriction R. I wonder if you can draw some parallels to the Vapnik-style classical problem of empirical risk minimization, where you are not merely fitting your behavior to the training set, but instead achieve the optimal trade-off between generalization ability and adherence to R.
In your example, an agent that inferred the boundaries of our restriction could generate a family of restrictions R_i that derive from slightly modifying its postulates. For example, if it knows you check in usually at midnight, it should consider the counterfactual scenario of you usually checking in at 11:59, 11:58, etc. and come up with the union of (R_i = play quietly only around time i), i.e., play quietly the whole time, since this achieves maximum generalization.
Unfortunately, things are complicated by the fact you said "I'll be checking up on you!" instead of "I'll be checking up on you at midnight!" The agent needs to go one step farther than the machine teaching problem and first know how many counterfactual training points it should generate to infer your intention (the R_i's above), and then infer it.
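As a toy rendering of that procedure (assuming each counterfactual R_i just demands silence around hour i, and reusing the two behaviours from the post; all of this is illustrative only):

```python
# Minimal sketch of the counterfactual-restriction idea above: generate a
# family R_i by shifting the postulated check time, then keep only behaviour
# that satisfies every member of the family.  Purely illustrative assumptions.

HOURS = range(24)

def satisfies(behaviour, check_hour):
    """behaviour maps hour -> noise level; R_i requires silence at check_hour."""
    return behaviour[check_hour] == 0

letter = {h: 0 if h == 0 else 1 for h in HOURS}   # quiet only at midnight
spirit = {h: 0 for h in HOURS}                    # quiet the whole time

# The inferred family R_i: "quiet around hour i", for every plausible i.
family = list(HOURS)

for name, behaviour in [("letter", letter), ("spirit", spirit)]:
    ok = all(satisfies(behaviour, i) for i in family)
    print(name, "generalises over the R_i family:", ok)
```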
A high-level conjecture is the following: does human CEV, if it can be modeled as a region within some natural high-dimensional real-valued space (e.g., R^n for high n where each dimension is a utility function?), admit minimal or near-minimal curvature as a Riemannian manifold, assuming we could populate the space with the maximum available set of training data as mined from all human literature?
A positive answer to the above question would be philosophically satisfying as it would imply a potential AI would not have to set up corner cases and thus have the appearance of overfitting to the restrictions.
EDIT: Framed in this way, could we use cross-validation on the above mentioned training set to test our CEV region?
↑ comment by robertzk (Technoguyrob) · 2015-03-10T02:58:21.338Z
Incidentally, for a community whose most important goal is solving a math problem, why is there no MathJax or other built-in Latex support?
↑ comment by Stuart_Armstrong · 2015-03-10T09:57:47.848Z
Thanks, looking at the Vapnik stuff now.
comment by paulfchristiano · 2015-10-28T18:33:13.160Z
If you want to talk about the behavior of the AI being uninformative, you need to talk about the distribution over possible values of R. If the distribution is just "it exists" or "it doesn't," then it's clear that the AI will just have to satisfy R in every case, and you don't get anything beyond the restriction itself.
If there is some broader distribution, then it's less clear what happens, but as far as I can tell this is no better than simply having the AI care about an unknown requirement from that distribution.
↑ comment by Stuart_Armstrong · 2015-10-28T18:39:39.171Z
There is an R, it's given. There is a distribution over possible R's for agents that only know the data E, F, B(), and v.
But this approach seems very wobbly to me; I no longer give it much potential.
comment by [deleted] · 2015-03-10T12:53:49.544Z
We could model children as ever-destructive chaotic AIs (why yes, I am a parent - how did you guess?)
I think if our children could vastly outsmart us, our efforts would be all in vain. The only hope we have of controlling our wailing 1-year-old when e.g. she is hungry is her "stupidity" - that "look! dad is making a funny face! look! there is something shiny there!" kind of thing can make the wailing stop for a minute until my wife finishes her meal and starts feeding her. The same child with an IQ over mine - forget it: "dad, you are lying, so I will just wail harder, thank you."
Of course a high-IQ toddler would understand a rational argument like "your meal is being prepared, deal with hunger until it is done" and would not be chaotic and destructive.
Which may be a problem with your model. Why does a highly intelligent AI want to be chaotic and destructive?
↑ comment by Stuart_Armstrong · 2015-03-10T14:38:29.142Z
The model is illustrative only. The key point is that "letter of the law" obedience tends to be highly informative as to what the law is, unlike "spirit of the law" obedience.
comment by Gunnar_Zarncke · 2015-03-09T20:47:26.200Z
Very interesting.
Some time ago I posted a comment about raising AIs with a caregiver. Basically, rules given to the child/AI cause it to search for circumventions, whereas rewarding positive behaviors can be modelled as shaping the motivation structure of the child/AI. At least for children, positively reinforced behaviors cause searching for new behaviors in that direction and implicitly inhibit other behaviors.
However, the theoretical model you gave for the motivation part looks quite different from my model. The children model seems to work more like heavily penalizing (for an advanced AI) search outside the rewarded areas. This is different from the usual temporal discounting, so it might nonetheless be another AI control approach. Search distance would need to be quantified for this, and that is more difficult than time discounting.