ALBA: can you be "aligned" at increased "capacity"?
post by Stuart_Armstrong · 2017-04-13T19:25:12.000Z
I think that Paul Christiano's ALBA proposal is good in practice, but has conceptual problems in principle.
Specifically, I don't think it makes sense to talk about bootstrapping an "aligned" agent to one that is still "aligned" but that has an increased capacity.
The main reason is that I don't see "aligned" as a definition that makes sense independently of capacity.
These are not the lands of your forefathers
Here's a simple example: let $r$ be a reward function that is perfectly aligned with human happiness within ordinary circumstances (and within a few un-ordinary circumstances that humans can think up).
Then the initial agent - $A_0$, a human - trains a reward $r_1$ for an agent $A_1$. This agent is limited in some way - maybe it doesn't have much speed or time - but the aim is for $r_1$ to ensure that $A_1$ is aligned with $r$.
Then the capacity of $A_1$ is increased to $B_1$, a slow powerful agent. It computes the reward $r_2$ to ensure the alignment of $A_2$, and so on.
The nature of the agents is not defined - they might be algorithms calling the $A_i$ and $B_i$ as subroutines, humans may be involved, and so on.
If the humans are unimaginative and don't deliberately seek out more extreme and exotic test cases, the best case scenario is for $r_n \to r$ as $n \to \infty$.
And eventually there will be an agent $B_n$ that is powerful enough to overwhelm the whole system and take over. It will do this in full agreement with $A_n$, because they share the same objective $r_n$. And then $B_n$ will push the world into extra-ordinary circumstances and proceed to maximise $r_n$, with likely disastrous results for us humans.
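To make the failure mode concrete, here is a toy sketch in Python (my own illustration, not ALBA and not part of the original argument): a proxy reward that agrees with "human happiness" on ordinary states, and agents whose only difference is how wide a range of states they can search. Every name and threshold below is an assumption made up for the illustration.

```python
# Toy illustration only: a proxy reward aligned with "human happiness" on
# ordinary states, maximised by agents of increasing capacity.
# All functions, names and thresholds here are illustrative assumptions.

ORDINARY = 10.0  # states with |s| <= ORDINARY count as "ordinary circumstances"

def human_happiness(s: float) -> float:
    # The thing we actually care about: fine near ordinary states,
    # catastrophic far outside them.
    return -abs(s) if abs(s) <= ORDINARY else -1000.0

def proxy_reward(s: float) -> float:
    # r: agrees with human_happiness on ordinary states, but keeps
    # rewarding ever more extreme states outside that range.
    return -abs(s) if abs(s) <= ORDINARY else abs(s)

def best_state(capacity: int) -> float:
    # An agent's "capacity" here is just how wide a range of states it can
    # consider when maximising the proxy reward.
    candidates = [i / 10 for i in range(-capacity * 10, capacity * 10 + 1)]
    return max(candidates, key=proxy_reward)

for capacity in (5, 10, 50):
    s = best_state(capacity)
    print(capacity, s, proxy_reward(s), human_happiness(s))
# Up to capacity 10 the chosen state is s = 0: proxy and happiness agree.
# At capacity 50 the agent picks an extreme state: proxy reward is high,
# human happiness is catastrophic, with no change of goal in between.
```

Nothing about the objective changes between capacity 10 and capacity 50; the divergence comes entirely from the wider search, which is the sense in which the capacity increase itself breaks the alignment.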
The nature of the problem
So what went wrong? At what point did the agents go out of alignment?
In one sense, at $B_n$. In another sense, at $A_1$ (and, in another interesting sense, at $A_0$, the human). The reward $r$ was aligned, as long as the agent stayed near the bounds of the ordinary. As soon as it was no longer restricted to that, it went out of alignment, not because of a goal drift, but because of a capacity increase.
14 comments
comment by paulfchristiano · 2017-04-14T08:09:49.000Z · LW(p) · GW(p)
good in practice, but has conceptual problems in principle
I think that my schemes are pretty purely theoretical; for now I'm happy to not even be in the running for "good in practice" :)
Replies from: Stuart_Armstrong↑ comment by Stuart_Armstrong · 2017-04-14T09:01:51.000Z · LW(p) · GW(p)
In practice, I'd run your design as an attempt to formalise some of the concepts from this post, and figure out how to automate extending human preferences to unusual areas.
comment by paulfchristiano · 2017-04-14T08:06:56.000Z · LW(p) · GW(p)
I think it's very reasonable to question whether my abstractions make any sense, and that this is a likely place for my entire approach to break down.
That said, I think there is definitely a superficially plausible meaning of "aligned at capacity c": it's an agent who is "doing its best" to do what we want. If attacking abstractions I think the focus should be on whether (a) that intuitive concept would be good enough to run the argument, and (b) whether the intuitive concept is actually internally coherent / is potentially formalizable.
If we replace "aligned at capacity c" with "is trying its best to do what we want," it seems like this objection falls apart.
If agent $B_{n-1}$ is choosing what values agent $A_n$ should maximize, and it picks $r_n$, and if it's clear to humans that maximizing $r_n$ is at odds with human interests (as compared to e.g. leaving humans in meaningful control of the situation)---then prima facie agent $B_{n-1}$ has failed to live up to its contract of trying to do what we want.
Now that could certainly happen anyway. For example, agent $B_{n-1}$ could not know how to tell agent $A_n$ to "leave humans in meaningful control of the situation." Or agent $B_{n-1}$ could be subhuman in important respects, or could generalize in an unintended way, etc.
But those aren't decisive objections to the concept of "trying to do what we want" or to the basic analysis plan. Those feel like problems we should consider one by one, and which I have considered and written about at least a little bit. If you are imagining some particular problem in this space it would be good to zoom in on it.
(On top of that objection, "aligned at capacity c" was intended to mean "for every input, tries to do what we want, and has competence c," not to mean "is aligned for everyday inputs." Whether that can be achieved is another interesting question.)
As an aside: I think that benign is more likely to be a useful concept than "aligned at capacity c," though it's still extremely informal.
Replies from: Imported-IAFF-User-214, Stuart_Armstrong↑ comment by IAFF-User-214 (Imported-IAFF-User-214) · 2017-04-15T10:50:31.000Z · LW(p) · GW(p)
If agent $B_{n-1}$ is choosing what values agent $A_n$ should maximize, and it picks $r_n$, and if it’s clear to humans that maximizing $r_n$ is at odds with human interests (as compared to e.g. leaving humans in meaningful control of the situation)—then prima facie agent $B_{n-1}$ has failed to live up to its contract of trying to do what we want.
It seems to me that the default outcome for any process like this is always "$r_n$ is at odds with human interests, but not in a way that humans will notice until the downstream effects of decisions are felt". This framework does not deal with that problem: the mismatch is not incorporated into a model of what we want until feedback is received, and the default response to that feedback will be to execute a nearest unblocked strategy much like it. (This is especially concerning because a human is not a secure system, and downstream effects that will not be noticed by the human can include accidental or purposeful social/basilisk-like changes to the human's value system. The human being in the loop is only superficially protective.)
↑ comment by Stuart_Armstrong · 2017-04-14T09:15:28.000Z · LW(p) · GW(p)
“is trying its best to do what we want,”
Establishing what humans really want, in all circumstances including exotic ones, and given that humans are very hackable, seems to be the core problem. Is this actually easier than saying "assume the first AI has a friendly utility function"?
Replies from: paulfchristiano↑ comment by paulfchristiano · 2017-04-14T16:02:19.000Z · LW(p) · GW(p)
I don't know what you really want, even in mundane circumstances. Nevertheless, it's easy to talk about a motivational state in which I try my best to help you get what you want, and this would be sufficient to avert catastrophe. This would remain true if you were an alien with whom I share no cognitive machinery.
An example I often give is that a supervised learner is basically trying to do what I want, while usually being very weak. It may generalize catastrophically to unseen situations (which is a key problem), and it may not be very competent, but on the training distribution it's not going to kill me except by incompetence.
Replies from: Stuart_Armstrong↑ comment by Stuart_Armstrong · 2017-04-14T16:53:49.000Z · LW(p) · GW(p)
It may generalize catastrophically to unseen situations (which is a key problem)
That probably summarises my whole objection ^_^
Replies from: paulfchristiano↑ comment by paulfchristiano · 2017-04-15T01:24:20.000Z · LW(p) · GW(p)
But this could happen even if you train your agent using the "correct" reward function. And conversely, if we take as given an AI that can robustly maximize a given reward function, then it seems like my schemes don't have this generalization problem.
So it seems like this isn't a problem with the reward function, it's just the general problem of doing robust/reliable ML. It seems like that can be cleanly factored out of the kind of reward engineering I'm discussing in the ALBA post. Does that seem right?
(It could certainly be the case that robust/reliable ML is the real meat of aligning model-free RL systems. Indeed, I think that's a more common view in the ML community. Or, it could be the case that any ML system will fail to generalize in some catastrophic way, in which case the remedy is to make less use of learning.)
Replies from: Stuart_Armstrong↑ comment by Stuart_Armstrong · 2017-04-17T14:49:51.000Z · LW(p) · GW(p)
It seems like that can be cleanly factored out of the kind of reward engineering I’m discussing in the ALBA post. Does that seem right?
That doesn't seem right to me. If there isn't a problem with the reward function, then ALBA seems unnecessarily complicated. Conversely, if there is a problem, we might be able to use something like ALBA to try and fix it (this is why I was more positive about it in practice).
comment by danieldewey · 2017-04-14T07:18:29.000Z · LW(p) · GW(p)
I'm not sure you've gotten quite ALBA right here, and I think that causes a problem for your objection. Relevant writeups: most recent and original ALBA.
As I understand it, ALBA proposes the following process:
1. H trains A to choose actions that would get the best immediate feedback from H. A is benign (assuming that H could give not-catastrophic immediate feedback for all actions and that the learning process is robust). H defines the feedback, and so A doesn't make decisions that are more effective at anything than H is; A is just faster.
2. A (and possibly H) is (are) used to define a slow process A+ that makes "better" decisions than A or H would. (Better is in quotes because we don't have a definition of better; the best anyone knows how to do right now is look at the amplification process and say "yep, that should do better.") Maybe H uses A as an assistant, maybe a copy of A breaks down a decision problem into parts and hands them off to other copies of A, maybe A makes decisions that guide a much larger cognitive process.
3. The whole loop starts over with A+ used as H. (A schematic sketch of this loop is given below.)
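For concreteness, here is a schematic sketch of that three-step loop (my own rendering of the structure described above, not Paul's specification; `train_to_match_feedback` and `amplify` are hypothetical stubs standing in for the real distillation and amplification steps):

```python
from typing import Callable

# Overseers and agents are modelled as policies: functions from an
# observation to an action. Everything here is a purely illustrative stub.
Policy = Callable[[str], str]

def train_to_match_feedback(overseer: Policy) -> Policy:
    """Distillation stub: a fast agent trained to choose whatever action
    gets the best immediate feedback from the current overseer (here it
    literally defers to the overseer)."""
    return lambda obs: overseer(obs)

def amplify(overseer: Policy, agent: Policy) -> Policy:
    """Amplification stub: the overseer consults the agent before deciding;
    a stand-in for 'H uses A as an assistant' or 'copies of A decompose
    the problem'."""
    return lambda obs: overseer(obs + " | assistant says: " + agent(obs))

def alba_loop(human: Policy, rounds: int) -> Policy:
    overseer = human  # round 0: H itself is the overseer
    for _ in range(rounds):
        agent = train_to_match_feedback(overseer)   # step 1: distill A
        overseer = amplify(overseer, agent)         # step 2: build A+
        # step 3: the loop repeats with A+ playing the role of H
    return overseer

# Example with a trivial "human" policy.
print(alba_loop(lambda obs: "act on: " + obs, rounds=2)("some situation"))
```

The only structural point the sketch makes is that each round's overseer is built out of the previous round's overseer plus the agent distilled from it.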
The claim is that step 2 produces a system that is able to give "better" feedback than the human could -- feedback that considers more consequences more accurately in more complex decision situations, that has spent more effort introspecting, etc. This should make it able to handle circumstances further and further outside human-ordinary, eventually scaling up to extraordinary circumstances. So, while you say that the best case to hope for is $r_n \to r$, it seems like ALBA is claiming to do more.
A second objection is that while you call each $r_n$ a "reward function", each system is only trained to take actions that maximize the very next reward they get (not the sum of future rewards). This means that each system is only effective at anything insofar as the feedback function it's maximizing at each step considers the long-term consequences of each action. So, if $r_n \to r$, we don't have reason to think that the system will be competent at anything outside of the "normal circumstances + a few exceptions" you describe -- all of its planning power comes from $r$, so we should expect it to be basically incompetent where $r$ is incompetent.
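To spell the distinction out in symbols (my notation, not Daniel's): each trained system picks actions by the myopic objective on the left rather than the long-horizon objective on the right, so any foresight has to already be encoded in the reward it is given.

$$a_t = \arg\max_{a} \mathbb{E}\left[r_n(s_t, a)\right] \quad\text{rather than}\quad \pi^* = \arg\max_{\pi} \mathbb{E}\left[\sum_{t \ge 0} \gamma^t\, r_n(s_t, a_t)\right]$$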
Replies from: Stuart_Armstrong↑ comment by Stuart_Armstrong · 2017-04-14T09:00:03.000Z · LW(p) · GW(p)
that is able to give “better” feedback than the human could – feedback that considers more consequences more accurately in more complex decision situations, that has spent more effort introspecting
This is roughly how I would run ALBA in practice, and why I said it was better in practice than in theory. I'd be working with the considerations I mentioned in this post and trying to formalise how to extend utilities/rewards to new settings.
Replies from: danieldewey↑ comment by danieldewey · 2017-04-14T20:58:58.000Z · LW(p) · GW(p)
If I read Paul's post correctly, ALBA is supposed to do this in theory -- I don't understand the theory/practice distinction you're making.
Replies from: Stuart_Armstrong↑ comment by Stuart_Armstrong · 2017-04-17T14:47:18.000Z · LW(p) · GW(p)
I disagree. I'm arguing that the concept of "aligned at a certain capacity" makes little sense, and this is key to ALBA in theory.
comment by danieldewey · 2017-04-14T07:12:46.000Z · LW(p) · GW(p)
FWIW, this also reminded me of some discussion in Paul's post on capability amplification, where Paul asks whether we can even define good behavior in some parts of capability-space, e.g.:
The next step would be to ask: can we sensibly define “good behavior” for policies in the inaccessible part H? I suspect this will help focus our attention on the most philosophically fraught aspects of value alignment.
I'm not sure if that's relevant to your point, but it seemed like you might be interested.