How should AIs update a prior over human preferences?
post by Stuart_Armstrong · 2020-05-15T13:14:30.805Z · LW · GW · 9 comments
I've always emphasised the constructive aspect of figuring out human preferences [LW · GW], and the desired formal properties of preference learning processes.
A common response to these points is something along the lines of "have the AI pick a prior over human preferences, and update it".
However, I've come to realise that a prior over human preferences is of little use. The real key is figuring out how to update it, and that contains almost the entirety of the problem.
I've shown that you cannot deduce preferences from observations or facts about the world - at least, without making some assumptions. These assumptions are needed to bridge the gap between observations/facts, and updates to preferences.
For example, imagine you are doing cooperative inverse reinforcement learning (CIRL)[1] and want to deduce the preferences of the human H. CIRL assumes that H knows the true reward function, and is generally rational or noisily rational (along with a few other scenarios).
So, this is the bridging law:
- H knows their true reward function, and is noisily rational.
Given this, the AI has many options available to it, including the "drug the human with heroin [LW · GW]" approach. If H is not well-defined in the bridging law, then "do brain surgery on the human [LW · GW]" also becomes valid.
And not only are those approaches valid; if the AI wants to maximise the reward function, according to how this is defined, then these are the optimal policies, as they result in the most return, given that bridging law.
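To make the role of the bridging law concrete, here is a minimal sketch of Bayesian reward inference through a Boltzmann-rationality likelihood, assuming a small finite set of candidate reward functions; the function names and the beta parameter are illustrative, not anything from the post or from CIRL's original formulation.

```python
import numpy as np

def boltzmann_likelihood(q_values, action, beta=1.0):
    # P(action | state, R): under the bridging law "H is noisily rational",
    # the human picks actions with probability proportional to exp(beta * Q_R).
    logits = beta * np.asarray(q_values, dtype=float)
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return probs[action]

def update_reward_posterior(prior, q_values_per_reward, observed_action, beta=1.0):
    # Bayesian update of a prior over candidate reward functions,
    # given one observed human action and the bridging law above.
    likelihoods = np.array([
        boltzmann_likelihood(q, observed_action, beta)
        for q in q_values_per_reward
    ])
    posterior = np.asarray(prior, dtype=float) * likelihoods
    return posterior / posterior.sum()

# Two candidate rewards, three actions; Q-values of each action under each reward.
prior = [0.5, 0.5]
q_values_per_reward = [
    [1.0, 0.0, -1.0],   # Q-values of the three actions if R1 is the true reward
    [-1.0, 0.0, 1.0],   # Q-values if R2 is the true reward
]
print(update_reward_posterior(prior, q_values_per_reward, observed_action=0))
```

The point of the sketch is that every update is routed through the likelihood encoding "H is noisily rational"; if that assumption is false of the actual human - for instance, after the AI has drugged them - the update still goes through unchanged.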
Note that the following is not sufficient either:
- H has a noisy impression of their true reward function, and is noisily rational.
Neither of the "noisy" statements is true, so if the AI uses this bridging law, then, for almost any prior, preference learning will come to a bad end.
Joint priors
What we really want is something like:
- H has an imperfect impression of their true reward function, and is biased.
And yes, that bridging law is true. But it's also massively underdefined. We want to know how H's impression is imperfect, how they are biased, and also what counts as H versus some brain-surgeried replacement of them.
Once we know that - how, given certain human actions, the AI can deduce human preferences - this gives a joint prior p over R×Π_H: the possible human reward functions and the possible human policies[2]. Given that joint prior, then, yes, an AI can start deducing preferences from observations.
So instead of a "prior over preferences" and an "update bridging law", we need a joint object that does both.
But such a joint prior is essentially the same object as the assumptions needed to overcome the Occam's razor result.
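Here is a minimal sketch of what such a joint object might look like, assuming finite sets of candidate rewards and candidate human policies; the array names and sizes are made up for illustration. The bridging assumptions live entirely in how rewards and policies are correlated under the joint prior.

```python
import numpy as np

rng = np.random.default_rng(0)
n_rewards, n_policies, n_states, n_actions = 2, 3, 5, 4

# joint_prior[i, j] = P(reward_i, policy_j): one object that encodes both what H
# might want and how H's behaviour relates to what H wants.
joint_prior = rng.dirichlet(np.ones(n_rewards * n_policies)).reshape(n_rewards, n_policies)

# policy_models[j, s, a] = P(a | s) under candidate human policy j.
policy_models = rng.dirichlet(np.ones(n_actions), size=(n_policies, n_states))

def update_joint(joint, policy_models, state, action):
    # Update the joint prior over (reward, policy) after one observed human action.
    # The likelihood depends only on the policy component, but because rewards and
    # policies are correlated under the joint prior, the reward marginal shifts too.
    likelihood = policy_models[:, state, action]        # shape: (n_policies,)
    posterior = joint * likelihood[np.newaxis, :]       # broadcast over rewards
    return posterior / posterior.sum()

joint_posterior = update_joint(joint_prior, policy_models, state=2, action=1)
reward_marginal = joint_posterior.sum(axis=1)           # P(reward | observations)
```

In this toy version the "bridging law" is just the correlation structure of joint_prior; observations alone cannot pin that structure down, which is the Occam's razor point above.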
Other areas
It seems to me that realisability [LW · GW] has a similar problem: if the AI has an imperfect model of how it's embedded in the world, then it will "learn" disastrously wrong things.
Comments
comment by Pattern · 2020-05-15T19:17:44.136Z · LW(p) · GW(p)
However, I've come to realise that a prior over human preferences is of little use. The real key is figuring out how to update it, and that contains almost the entirety of the problem.
This is a great point. It summarizes something important really well.
If this is the main idea, then why the title? (As opposed to "How should AIs update a prior over human preferences?")
and also what counts as H versus some brain-surgeried replacement of them.
Also the heroin thing. (Which seems reversible at first glance, but that might not be entirely true (absent brain surgery, which has its own irreversible element...).)
Once we know that, we get a joint prior p over R×ΠH, the human reward functions and the human's policy[^huid].
This sentence is a little ambiguous, as is the usage of "[^huid]". (Expected meaning: AI learns about what people want, and what people are doing, both in general and specifically. This phrasing evokes the question of what the AI will think of what people want(/are doing) as groups rather than individually.)
So instead of a "prior over preferences" and an "update bridging law", we need a joint object that does both.
The thesis makes sense. (And seems close to a good title.)
↑ comment by Stuart_Armstrong · 2020-05-16T09:09:43.219Z · LW(p) · GW(p)
thanks! Changed title (and corrected badly formatted footnote)
comment by Rohin Shah (rohinmshah) · 2020-05-15T17:16:29.049Z · LW(p) · GW(p)
And not only are those approaches valid; if the AI wants to maximise the reward function, according to how this is defined, then these are the optimal policies, as they result in the most return, given that bridging law.
This is the part that I think needs more detail; it seems to me that this depends on what you mean by an "optimal policy".
Here's one possible algorithm. You have two separate systems:
- The first system (the "estimator") has a distribution over rewards that it updates via Bayes' rule with the Boltzmann-rationality likelihood ratio. At the end of the episode, it computes and outputs the expected reward of the entire trajectory according to its final distribution over rewards.
- The second system (the "actor") acts in the world with an accurate model of reality, and maximizes the expected reward that comes out of the estimator at the end of the trajectory. (You could imagine training a neural net in the real world so that you have the "accurate model of reality" part.)
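A toy rendering of this two-system setup, assuming a small finite environment and a fixed Boltzmann likelihood inside the estimator; the class and variable names are made up for illustration.

```python
import numpy as np

class Estimator:
    # Keeps a distribution over candidate rewards, updated with a *fixed*
    # Boltzmann-rationality likelihood applied to observed human actions.
    def __init__(self, prior, human_q, beta=1.0):
        self.dist = np.asarray(prior, dtype=float)
        self.human_q = np.asarray(human_q, dtype=float)  # human_q[r, state, action]
        self.beta = beta

    def observe(self, state, human_action):
        logits = self.beta * self.human_q[:, state, :]
        probs = np.exp(logits - logits.max(axis=1, keepdims=True))
        probs /= probs.sum(axis=1, keepdims=True)
        self.dist *= probs[:, human_action]
        self.dist /= self.dist.sum()

    def expected_return(self, returns_per_reward):
        # returns_per_reward[r] = return of the whole trajectory under candidate reward r.
        return float(self.dist @ np.asarray(returns_per_reward, dtype=float))

# The "actor" is whatever policy maximises estimator.expected_return(...) at the end
# of the episode -- including policies that steer the human so that estimator.dist
# concentrates on easy-to-satisfy rewards.
```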
I agree that for such a system, the optimal policy of the actor is to rig the estimator, and to "intentionally" bias it towards easy-to-satisfy rewards like "the human loves heroin".
The part that confuses me is why we're having two separate systems with different objectives where one system is dumb and the other system is smart. CIRL, iterated amplification and debate all aim to create a single system that can do both estimation of human preferences and control in the same model.
(Maybe you could view iterated amplification as having two separate systems -- the "amplified model" and the "distilled model" -- where the amplified model serves the role of the estimator and the distilled model serves the role of the actor. This analogy seems pretty forced, but even if you buy it, it's noteworthy that the estimator is supposed to be smarter than the actor.)
So, here's a second algorithm. Imagine that you have a complex CIRL game that models the real world well but assumes that the human is Boltzmann-rational. You find an optimal policy for that game (i.e. not "in the real world / in the presence of misspecification"). Then you deploy that policy in the real world. Such a policy is going to "try" to learn preferences, learn incorrectly, and then act according to those incorrect learned preferences, but it is not going to "intentionally" rig the learning process.
It might think "hey, I should check whether the human likes heroin by giving them some", and then think "oh they really do love heroin, I should pump them full of it". It won't think "aha, if I give the human heroin, then they'll ask for more heroin, causing my Boltzmann-rationality estimator module to predict they like heroin, and then I can get easy points by giving humans heroin".
↑ comment by Stuart_Armstrong · 2020-05-15T19:19:25.854Z · LW(p) · GW(p)
I agree that for such a system, the optimal policy of the actor is to rig the estimator, and to "intentionally" bias it towards easy-to-satisfy rewards like "the human loves heroin".
The part that confuses me is why we're having two separate systems with different objectives where one system is dumb and the other system is smart.
We don't need to have two separate systems. There are two meanings to your "bias it towards" phrase: the first is the informal human one, where "the human loves heroin" is clearly a bias. The second is some formal definition of what is biasing and what isn't. And the system doesn't have that. The "estimator" doesn't "know" that "the human loves heroin" is a bias; instead, it sees this as a perfectly satisfactory way of accomplishing its goals, according to the bridging function it's been given. There is no conflict between estimator and actor.
Imagine that you have a complex CIRL game that models the real world well but assumes that the human is Boltzmann-rational. [...] Such a policy is going to "try" to learn preferences, learn incorrectly, and then act according to those incorrect learned preferences, but it is not going to "intentionally" rig the learning process.
The AI would not see any of these actions as "rigging", even if we would.
It might think "hey, I should check whether the human likes heroin by giving them some", and then think "oh they really do love heroin, I should pump them full of it".
It will do this if it can't already predict the effect of giving them heroin.
It won't think "aha, if I give the human heroin, then they'll ask for more heroin, causing my Boltzmann-rationality estimator module to predict they like heroin, and then I can get easy points by giving humans heroin".
If it can predict the effect of giving humans heroin, it will think something like that. It will think: "if I give the humans heroin, they'll ask for more heroin; my Boltzmann-rationality estimator module confirms that this means they like heroin, so I can efficiently satisfy their preferences by giving humans heroin".
↑ comment by Charlie Steiner · 2020-05-16T05:00:28.667Z · LW(p) · GW(p)
I think Rohin's point is that the model of
"if I give the humans heroin, they'll ask for more heroin; my Boltzmann-rationality estimator module confirms that this means they like heroin, so I can efficiently satisfy their preferences by giving humans heroin".
is more IRL than CIRL. It doesn't necessarily assume that the human knows their own utility function and is trying to play a cooperative strategy with the AI that maximizes that same utility function. If I knew that what would really maximize utility is having that second hit of heroin, I'd try to indicate it to the AI I was cooperating with.
Problems with IRL look like "we modeled the human as an agent based on representative observations, and now we're going to try to maximize the modeled values, and that's bad." Problems with CIRL look like "we're trying to play this cooperative game with the human that involves modeling it as an agent playing the same game, and now we're going to try to take actions that have really high EV in the game, and that's bad."
↑ comment by Stuart_Armstrong · 2020-05-18T15:20:43.843Z · LW(p) · GW(p)
Thanks! Responded here: https://www.lesswrong.com/posts/EYEkYX6vijL7zsKEt/reward-functions-and-updating-assumptions-can-hide-a [LW · GW]
↑ comment by Rohin Shah (rohinmshah) · 2020-05-15T22:36:35.524Z · LW(p) · GW(p)
The key point is not that the AI knows what is or isn't "rigging", or that the AI "knows what a bias is". The key point is that in a CIRL game, by construction there is a true (unknown) reward function, and thus an optimal policy must be viewable as being Bayesian about the reward function, and in particular its actions must be consistent with conservation of expected evidence about the reward function; anything which "rigs" the "learning process" does not satisfy this property and so can't be optimal.
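In symbols, the property being appealed to is (roughly) conservation of expected evidence: averaged over the observations an action can produce, the expected posterior over the reward equals the current belief, so no action can be expected to push the belief in a chosen direction.

```latex
\mathbb{E}_{o \sim P(o \mid a)}\big[\, P(R \mid o, a) \,\big] \;=\; P(R \mid a) \;=\; P(R)
```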
You might reasonably ask where the magic happens. The CIRL game that you choose would have to commit to some connection between rewards and behavior. It could be that in one episode the human wants heroin (but doesn't know it) and in another episode the human doesn't want heroin (this depends on the prior over rewards). However, it could never be the case that in a single episode (where the reward must be fixed) the human doesn't want heroin, and then later in the same episode the human does want heroin. Perhaps in the real world this can happen; that would make this policy suboptimal in the real world. (What it does then is unclear since it depends on how the policy generalizes out of distribution.)
If this doesn't clarify it, I'll probably table this discussion until publishing an upcoming paper on CIRL games (where it will probably be renamed to assistance games).
EDIT: Perhaps another way to put this: I agree that if you train an AI system to act such that it maximizes the expected reward under the posterior inferred by a fixed update rule looking at the AI system's actions and resulting states, the AI will tend to gain reward by choosing actions which when plugged into the update rule lead to a posterior that is "easy to maximize". This seems like training the controller but not training the estimator, and so the controller learns information about the world that allows it to "trick" the estimator into updating in a particular direction (something that would be disallowed by the rules of probability applied to a unified Bayesian agent, and is only possible here because either a) the estimator is uncalibrated or b) the controller learns information that the estimator doesn't know).
Instead, you should train an AI system such that it maximizes the expected reward it gets under the prior; this is what CIRL / assistance games do. This is kinda sorta like training both the "estimator" and the "controller" simultaneously, and so the controller can't gain any information that the estimator doesn't have (at least at optimality).
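One way to see the contrast in code, under the same toy assumptions as above: the first objective scores a policy by the return its own trajectory receives under the posterior that a fixed update rule produces from that trajectory, while the assistance-game objective scores it under the prior. The function names and numbers are illustrative.

```python
import numpy as np

def fixed_update_objective(returns_per_reward, induced_posterior):
    # Expected return of the trajectory under the posterior that a *fixed* update
    # rule outputs after seeing that same trajectory. A policy can gain here by
    # steering the induced posterior towards rewards its trajectory scores well on.
    return float(np.asarray(induced_posterior) @ np.asarray(returns_per_reward))

def assistance_game_objective(returns_per_reward, prior):
    # Expected return under the prior over rewards, fixed before acting.
    # Rigging the update can't help: the weight on each candidate reward is fixed.
    return float(np.asarray(prior) @ np.asarray(returns_per_reward))

prior = np.array([0.9, 0.1])               # P(R1), P(R2)
returns = np.array([0.0, 10.0])            # the trajectory scores 0 under R1, 10 under R2
rigged_posterior = np.array([0.05, 0.95])  # what a steered update rule might output

print(fixed_update_objective(returns, rigged_posterior))   # 9.5 -- rigging pays off
print(assistance_game_objective(returns, prior))           # 1.0 -- judged by the prior
```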
↑ comment by Stuart_Armstrong · 2020-05-18T15:21:00.362Z · LW(p) · GW(p)
Thanks! Responded here: https://www.lesswrong.com/posts/EYEkYX6vijL7zsKEt/reward-functions-and-updating-assumptions-can-hide-a [LW · GW]
comment by Anon User (anon-user) · 2020-05-16T04:25:14.644Z · LW(p) · GW(p)
I wonder whether you may be conflating two somewhat distinct (perhaps even orthogonal) challenges not modeled in the CIRL model:
- Human actions may reflect human values very imperfectly (or worse, may be an imperfect reflection of inconsistent, conflicting values).
- Some actions by the AI may damage the human, at which point the human's actions may stop being meaningfully correlated with the value function. This is a problem that would still be relevant even if we somehow found an ideal human capable of acting on their values in a perfectly rational manner.
The first challenge "only" requires the AI to be better at deducing the "real" values ("only" is in quotes because it's obviously still a major unsolved problem, and "real" is in quotes because it's not a given what that actually means). The second challenge is about the AI needing to be constrained in its actions even before it knows the value function - but there is at least a whole field of Safe RL on how to do this for much simpler tasks, like learning to move a robotic arm without breaking anything in the process.