AI-created pseudo-deontology
post by Stuart_Armstrong · 2015-02-12T21:11:45.491Z · LW · GW · Legacy · 35 comments
I'm soon going to go on a two-day "AI control retreat", during which I'll be without internet, family, or any other contact: just a few books and thinking about AI control. In the meantime, here is one idea I found along the way.
We often prefer leaders to follow deontological rules, because these are harder for those whose interests don't align with ours to manipulate (you could say similar things about frequentist statistics versus Bayesian ones).
What if we applied the same idea to AI control? Not by giving the AI deontological restrictions, but by programming it with a similar goal: to prevent a misalignment of values from being disastrous. But who could do this? Well, another AI.
My rough idea goes something like this:
AI A is tasked with maximising a utility function u - a utility function which, crucially, it doesn't yet know. Its sole task is to create AI B, which will be given a utility function v and act on it.
What will v be? Well, I was thinking of taking u and adding some noise - nasty noise. By nasty noise I mean v = u + w, not v = max(u, w). In the first case, you could maximise v while sacrificing u completely, if w is suitably chosen. In fact, I was thinking of adding an agent C (which need not actually exist). It would be motivated to maximise -u; it would have the code of B and the set of candidate functions u + w, and would choose v to be the worst possible option (from the perspective of a u-maximiser) in that set.
So agent A, which doesn't know u, is motivated to design B so that B follows its given motivation to some extent, but not to extremes - not in ways that would completely sacrifice some sub-part of its utility function, because that sub-part might be the original u.
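A minimal sketch of the selection step described above, assuming toy utility functions over a handful of outcomes; the specific outcomes, payoffs and candidate noise terms are invented for illustration, and B is modelled here as a naive v-maximiser rather than the more careful agent A is supposed to build:

```python
# Toy model of the A/B/C setup: utility functions map outcomes to values.
# All outcomes, payoffs and noise terms below are illustrative assumptions.

def best_outcome(utility, outcomes):
    """What a naive maximiser (a simple stand-in for B) picks under `utility`."""
    return max(outcomes, key=utility)

def adversarial_v(u, noise_candidates, outcomes):
    """Agent C: choose w so that a B maximising v = u + w does worst by u's lights."""
    def v_from(w):
        return lambda o: u(o) + w(o)
    worst_w = min(noise_candidates,
                  key=lambda w: u(best_outcome(v_from(w), outcomes)))
    return v_from(worst_w)

outcomes = ['a', 'b', 'c']
u = {'a': 10, 'b': 5, 'c': 0}.get            # u likes 'a' best
noise_candidates = [
    {'a': 0, 'b': 0, 'c': 0}.get,            # no noise
    {'a': -20, 'b': 0, 'c': 0}.get,          # punishes u's favourite outcome
    {'a': 0, 'b': 0, 'c': 15}.get,           # rewards u's least favourite outcome
]

v = adversarial_v(u, noise_candidates, outcomes)
print(best_outcome(v, outcomes))             # 'c': the u-worst behaviour C can induce
```

In this toy run C picks the noise term that rewards u's least favoured outcome, so a naive B ends up doing the u-worst thing; A's task is to design a B whose behaviour degrades less badly under that kind of adversarial choice.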
Do people feel this idea is implementable/improvable?
35 comments
comment by [deleted] · 2015-02-12T22:23:03.404Z · LW(p) · GW(p)
Stuart, have you looked at AIs that don't have utility functions?
I don't think people want their leaders to follow deontological rules. It's more like "I wish they would follow rule X whenever possible." The last part is pretty important. "When possible" means "when it doesn't lead to negative-utility outcomes." Or rather, "if they just followed this rule they'd be more likely to end up in good outcomes vs bad outcomes."
These are all ways of describing a heuristic, not a hard utility rule. No big surprise there, because humans are composed of heuristics, not hard unbreakable rules.
↑ comment by Stuart_Armstrong · 2015-02-13T12:11:57.320Z · LW(p) · GW(p)
Stuart, have you looked at AIs that don't have utility functions?
They tend not to be stable, and there are a few suggestions floating around. But this design might result in such an AI; it might have a utility function, but wouldn't be a mindless maximiser.
↑ comment by [deleted] · 2015-02-13T17:18:43.558Z · LW(p) · GW(p)
They tend not to be stable.
Yes, well that is a tautology. What do you mean by stable? I assume you mean value-stable, which can be interpreted as maximizes-the-same-function-over-time. Something which does not behave as a utility maximizer therefore is pretty much by definition not "stable". By technical definition, at least.
My point was more that this "instability" is in fact the desirable outcome -- people wouldn't want technical-stability, they'd want perhaps a heuristic machine with sensible defaults and rational update procedures.
↑ comment by Stuart_Armstrong · 2015-02-14T22:35:03.615Z · LW(p) · GW(p)
There are other ways of interpreting value stability; a satisficer is one example. But those don't tend to be stable: http://lesswrong.com/lw/854/satisficers_want_to_become_maximisers/
people wouldn't want technical-stability, they'd want perhaps a heuristic machine with sensible defaults and rational update procedures.
And would those defaults and update procedures remain stable themselves?
↑ comment by [deleted] · 2015-02-15T04:19:02.436Z · LW(p) · GW(p)
There are other ways of interpreting value stability; a satisficer is one example. But those don't tend to be stable
That statement does not make sense. I hope if you read it with a fresh mind you can see why. "There are other ways of defining stable, but they are not stable." Perhaps you need to taboo the word stable here?
And would those defaults and update procedures remain stable themselves?
No, and that's the whole point! Stability is scary. Stability leads to Clippy. People wouldn't want stable. They'd want sensible. Sensible updates its behavior based on new information.
↑ comment by Stuart_Armstrong · 2015-02-15T14:36:02.441Z · LW(p) · GW(p)
Perhaps you need to taboo the word stable here?
"There are some agents that are defined to have constant value systems, where, nonetheless, the value system will drift in practice".
Stability leads to Clippy.
There are many bad stable outcomes, and an unstable update system will eventually fall into one of them, because they're attractor states. To avoid this, you need to define "sensible" in such a way that the agent never enters such states. You're effectively promoting a different kind of goal stability - a zone of stability, rather than a single point. It's not intrinsically a bad idea, but it's not clear that it's easier than finding a single ideal goal system. And it's very underdefined at this point.
↑ comment by [deleted] · 2015-02-15T17:30:57.997Z · LW(p) · GW(p)
"There are some agents that are defined to have constant value systems, where, nonetheless, the value system will drift in practice".
Ok, we are now quite deep in a thread that started with me pointing out that a constant value system might be a bad thing! People want machines whose actions align with their own morality, and humans don't have constant value systems (maybe this is where we disagree?).
There are many bad stable outcomes. And an unstable update system will eventually fall into one of them, because they're attractor states.
Why don't we see humans drifting into being sociopaths? E.g. starting as normal, well-adjusted human beings and then becoming sociopaths as they get older?
↑ comment by Stuart_Armstrong · 2015-02-16T15:56:58.346Z · LW(p) · GW(p)
Why don't we see humans drifting into being sociopaths? E.g. starting as normal, well-adjusted human beings and then becoming sociopaths as they get older?
That's an interesting question, partially because we'd want to copy that and implement it in AI. A large part of it seems to be social pressure, and lack of power: people must respond to social pressure, because they don't have the power to ignore it (a superintelligent AI would be very different, as would a superintelligent human). This is also connected with some evolutionary instincts, which cause us to behave in many ways as if we were in a tribal society with high costs to deviant behaviour - even if this is no longer the case.
The other main reason is evolution itself: very good at producing robustness, terrible at efficiency. If/when humans start self modifying freely, I'd start being worried about that tendency for them too...
↑ comment by dxu · 2015-02-13T23:12:22.722Z · LW(p) · GW(p)
Stuart, have you looked at AIs that don't have utility functions?
Such AIs would not satisfy the axioms of VNM-rationality, meaning their preferences wouldn't be structured intuitively, meaning... well, I'm not sure what, exactly, but since "intuitively" generally refers to human intuition, I think humanity probably wouldn't like that.
↑ comment by [deleted] · 2015-02-13T23:18:33.479Z · LW(p) · GW(p)
Since human beings are not utility maximizers and intuition is based on comparison to our own reference-class experience, I question your assumption that only VNM-rational agents would behave intuitively.
↑ comment by dxu · 2015-02-15T19:06:04.371Z · LW(p) · GW(p)
I'm not sure humans aren't utility maximizers. They simply don't maximize utility over worldstates. I do feel, however, that it's plausible humans are utility maximizers over brainstates.
(Also, even if humans aren't utility maximizers, that doesn't mean they will find the behavior other non-utility-maximizing agents intuitive. Humans often find the behavior of other humans extraordinarily unintuitive, for example--and these are identical brain designs we're talking about, here. If we start considering larger regions in mindspace, there's no guarantee that humans would like a non-utility-maximizing AI.)
comment by [deleted] · 2015-02-14T11:12:32.028Z · LW(p) · GW(p)
What's an AI control retreat? (french..)
↑ comment by Stuart_Armstrong · 2015-02-15T14:32:26.106Z · LW(p) · GW(p)
A period of contemplation and meditation in a monastery (figuratively speaking ^_^).
↑ comment by [deleted] · 2015-02-15T15:00:22.878Z · LW(p) · GW(p)
Thanks for the translation :) are you french?
↑ comment by Stuart_Armstrong · 2015-02-16T15:50:41.050Z · LW(p) · GW(p)
I went through secondary school in France (near Geneva) :-)
comment by Toggle · 2015-02-13T20:19:49.872Z · LW(p) · GW(p)
Made me think of Rawls's veil of ignorance, somewhat. I wonder: is there a whole family of techniques along the lines of "design intelligence B, given some ambiguity about your own values", with different forms or degrees of uncertainty?
It seems like it should avoid extreme or weirdly specialized results (e.g. paper-clipping), since hedging your bets is an immediate consequence. But it's still highly dependent on the language you're using to model those values in the first place.
I'm a little unclear on the behavioral consequences of 'utility function uncertainty' as opposed to the more usual empirical uncertainty. Technically, it is an empirical question, but what does it mean to act without having perfect confidence in your own utility function?
↑ comment by Stuart_Armstrong · 2015-02-14T22:38:45.266Z · LW(p) · GW(p)
but what does it mean to act without having perfect confidence in your own utility function?
If you look at utility functions as actual functions (not as affine equivalence classes of functions) then that uncertainty can be handled the usual way.
Suppose you want to either maximise u (the number of paperclips) or -u, you don't know which, but will find out soon. Then, in any case, you want to gain control of the paperclip factories...
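To make the instrumental point concrete, here is a rough sketch; the action names and payoff numbers are invented for illustration, not taken from the comment:

```python
# Under either hypothesis (maximise u or maximise -u), gaining control of the
# factories scores well, so it is favoured before the uncertainty resolves.
# Probabilities and payoffs are illustrative assumptions.

p_u, p_neg_u = 0.5, 0.5   # equally unsure which sign is the real goal

# Value of each action under each hypothesis, in terms of the utility the
# agent could eventually realise after learning which goal it has.
payoffs = {
    'do_nothing':      {'u': 0,  '-u': 0},
    'make_paperclips': {'u': 5,  '-u': -5},
    'seize_factories': {'u': 10, '-u': 10},   # control helps either goal later
}

def expected(action):
    return p_u * payoffs[action]['u'] + p_neg_u * payoffs[action]['-u']

best = max(payoffs, key=expected)
print(best, expected(best))   # seize_factories 10.0
```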
↑ comment by Toggle · 2015-02-15T01:54:11.749Z · LW(p) · GW(p)
Well, let's further say that you assign p(+u)=0.51 and p(-u)=0.49, slightly favoring the production of paperclips over their destruction. And just to keep it a toy problem, you've got a paperclip-making button and a paperclip-destroying button you can push, and no other means of interacting with reality.
A plain old 'confident' paperclip maximizer in this situation will happily just push the former button all day, receiving one Point every time it does so. But an uncertain agent will have the exact same behavior; the only difference is that it only gets .02 Points every time it pushes the button, and thus a lower overall score in the same period of time. But the number of paperclips produced is identical. The agent would not (for example) push the 'destroy' button 49 times and the 'create' button 51 times. In practical effect, this is as inconsequential as telling the confident agent that it gets two Points for every paperclip.
So in this toy problem, at least, uncertainty isn't a moderating force. On the other hand, I would intuitively expect different behavior in a less 'toy' problem - for example, an uncertain maximizer might build every paperclip with a secret self-destruct command so that the number of paperclips could be quickly reduced to zero. So there's a line somewhere where behavior changes. Maybe a good way to phrase my question would be- what are the special circumstances under which an uncertain utility function produces a change in behavior?
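A quick check of the arithmetic in the toy problem above (probabilities from the comment; the per-press expected-utility framing is an assumption about how the agent scores actions):

```python
# Expected utility per button press under p(+u) = 0.51, p(-u) = 0.49.

p_plus, p_minus = 0.51, 0.49

def expected_value(delta_paperclips):
    """Expected utility of an action that changes the paperclip count by delta."""
    return p_plus * delta_paperclips + p_minus * (-delta_paperclips)

for name, delta in {'create': +1, 'destroy': -1}.items():
    print(name, expected_value(delta))   # create: ~0.02, destroy: ~-0.02

# The argmax is 'create' every time, exactly as for a confident +u maximiser;
# the uncertainty only rescales the score, it never changes the chosen action.
```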
↑ comment by Stuart_Armstrong · 2015-02-15T14:31:48.031Z · LW(p) · GW(p)
If the AI expects to know tomorrow what utility function it has, it will be willing to wait, even if there is a (mild) discount rate, while a pure maximiser would not.
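A rough sketch of that comparison, continuing the two-button toy example from the parent comment; the discount factor is an assumed illustrative value:

```python
# Acting immediately under uncertainty vs waiting one discounted step and then
# acting with full knowledge of the utility function.

gamma = 0.95                                   # mild per-step discount (assumed)
p_plus, p_minus = 0.51, 0.49

value_act_now = p_plus * 1 + p_minus * (-1)    # ~0.02: best guess, often wrong sign
value_wait_then_act = gamma * 1                # 0.95: act correctly once informed

print(value_act_now, value_wait_then_act)
# A pure +u maximiser compares 1 (now) against gamma (later) and acts at once;
# the uncertain agent compares ~0.02 against 0.95 and prefers to wait.
```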
↑ comment by Toggle · 2015-02-16T20:23:24.000Z · LW(p) · GW(p)
In the more frequently considered case of a non-stable utility function, my understanding is that the agent will not try to identify the terminal attractor and then act according to that - it doesn't care about what 'it' will value in the future, except instrumentally. Rather, it will attempt to maximize its current utility function, given a future agent/self acting according to a different function. Metaphorically, it gets one move in a chess game against its future selves.
I don't see any reason for a temporarily uncertain agent to act any differently. If there is no function that is, right now, motivating it to maximize paperclips, why should it care that it will be so motivated in the future? That would seem to require a kind of recursive utility function, one in which it gains utility from maximizing its utility function in the abstract.
↑ comment by Stuart_Armstrong · 2015-02-17T14:52:14.733Z · LW(p) · GW(p)
In this case, the AI has a stable utility function - it just doesn't know yet what it is.
For instance, it could be "in worlds where a certain coin was heads, maximise paperclips; in other worlds, minimise them", and it has no info yet on the coin flip. That's a perfectly consistent and stable utility function.
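A minimal way to write that down (the field names are illustrative): the utility is one fixed function of the world, which happens to depend on the coin, so uncertainty about the coin is ordinary empirical uncertainty rather than uncertainty about the function itself.

```python
# The coin-flip utility as a single, stable function over world descriptions.

def v(world):
    """world: dict with 'coin' in {'heads', 'tails'} and an integer 'paperclips'."""
    sign = 1 if world['coin'] == 'heads' else -1
    return sign * world['paperclips']

# Before observing the coin, the agent maximises expected v over its beliefs;
# the function itself never changes.
print(v({'coin': 'heads', 'paperclips': 3}))   #  3
print(v({'coin': 'tails', 'paperclips': 3}))   # -3
```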
comment by polymathwannabe · 2015-02-13T13:00:43.022Z · LW(p) · GW(p)
This sounded to me like being ruled by two Roman consuls, each of whom can override the other's decisions. A part of me likes the idea.
↑ comment by Stuart_Armstrong · 2015-02-13T14:28:02.694Z · LW(p) · GW(p)
It's more like: one Roman consul writes the constitution that the other must follow.
↑ comment by Lumifer · 2015-02-13T15:57:18.135Z · LW(p) · GW(p)
This sounded to me like being ruled by two Roman consuls, each of whom can override the other's decisions.
Hey, looks like the doctrine of the separation of powers to me. Not a new idea and one that actually has been tried in real life :-)
comment by [deleted] · 2015-02-13T05:09:39.551Z · LW(p) · GW(p)
I think the idea of having additional agents B (and C) to act as a form of control is definitely worth pursuing, though I am not clear how it would be implemented.
Is 'w' just random noise added to the max value of u?
If so, would this just act as a limiter and eventually it would find a result close to the original max utility anyway once the random noise falls close to zero?
↑ comment by Stuart_Armstrong · 2015-02-13T12:13:24.664Z · LW(p) · GW(p)
Specifying v is part of the challenge. But by "noise" I mean a whole other utility function added permanently onto u. It would not "fall"; it would be a permanent feature of v.
comment by Sergej_Shegurin · 2015-02-19T19:07:28.442Z · LW(p) · GW(p)
In my opinion, the best of the proposed solutions to the AI safety problem is to make AI number 1, tell it that we are going to create another AI (number 2), and ask AI number 1 how to ensure the friendliness and safety of AI number 2, and how to ensure that an unsafe AI is not created. This solution has its chances of failing, but in my opinion it's still much better than any other proposed solution. What do you think?
↑ comment by Richard_Kennaway · 2015-02-19T19:12:56.032Z · LW(p) · GW(p)
If AI 1 cannot be trusted, any AI it tells us how to build cannot be trusted.