What are some non-purely-sampling ways to do deep RL?
post by evhub · 2019-12-05T00:09:54.665Z · LW · GW · 6 comments
This is a question post.
Conventionally in machine learning, if you want to learn to minimize some loss or maximize some expected return, you do so by sampling a bunch of losses/rewards and training on those. Since the model only ever sees the loss or reward function through the lens of those specific samples, this basic approach introduces a proxy alignment problem.
For example, suppose you train an RL agent to maximize its future discounted return according to some reward function r. Furthermore, suppose there exists some other reward function r' such that r and r' give equivalent samples on the training distribution, but diverge elsewhere. If you just train your agent by evaluating r on a bunch of samples, then even if your model is in some sense trying to do the right thing, it has no possible way of knowing whether r or r' is the right generalization.
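As a toy illustration of that failure mode (my own construction, not something from the post), here are two reward functions that produce identical samples on a narrow training distribution but diverge as soon as states fall outside it:

```python
import numpy as np

def r(state):
    """The intended reward: prefer states close to the origin."""
    return -abs(state)

def r_prime(state):
    """A proxy that agrees with r on the training range (|state| <= 1)
    but diverges outside it."""
    return -abs(state) if abs(state) <= 1.0 else abs(state)

rng = np.random.default_rng(0)
train_states = rng.uniform(-1.0, 1.0, size=1000)   # training distribution
deploy_states = rng.uniform(-5.0, 5.0, size=1000)  # broader deployment distribution

# Identical on every training sample, so samples alone cannot tell them apart...
print(max(abs(r(s) - r_prime(s)) for s in train_states))   # 0.0
# ...but far apart off-distribution, where the "right generalization" matters.
print(max(abs(r(s) - r_prime(s)) for s in deploy_states))  # up to ~10
```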
In many cases, however, we know exactly what r is—we have explicit code for it and everything (or at least some sort of natural-language description of it)—but we still only make use of r via sampling/evaluation. Of course, in many RL settings you really do only know how to evaluate r, not inspect it in any other way. However, I think a system that only works in settings where you have more access to the reward function than that can still do quite a lot—even if you explicitly know an environment's reward function, it can still be quite difficult to figure out the optimal policy (think Go, for example), such that having an ML system which can figure it out for you is quite powerful.
So, here’s my question: at least for environments in which you have a known reward function, what are some ways of making use of that information in training a deep learning model other than evaluating that reward function on a bunch of samples? I’m also interested in ways of doing this in non-RL settings, though I still mostly only want to focus on deep learning approaches—there are certainly ways of doing this in more classical machine learning, but I’m less interested in those.
Some possibilities that I’ve considered so far:
- Put a differentiable copy of the reward function inside the network during training such that the network is able to arbitrarily query the reward function however it wants (credit to Nevan Wichers for this idea; see the sketch after this list). For a smooth reward function, you could also give your model the ability to explicitly query gradients.
- Express your reward function as a differentiable function with tunable parameters, put a bunch of copies in your network, and then train without freezing those tunable parameters (or maybe freeze for the first n steps then unfreeze). This specific implementation seems pretty janky, but the basic idea here is to find a way to bias the network towards learning an algorithm that includes an objective that's similar to the actual reward function.
- Using transparency/interpretability tools, figure out how the model is internally representing the reward function and then enforce that it do so in a way that maps correctly onto the actual reward function.
- Use a language model to make sense of a natural language description of your reward function in a way that allows it to act as an RL agent. For example, you could fine-tune a language model on the task of mapping natural-language descriptions of reward functions into optimal actions under that reward.
- Same as the language model idea, but instead of using natural language, use some sort of mathematical/logical/programming language instead. For example, you might be able to do something like this if you had a powerful deep-learning-based theorem prover.
- (EDIT) Here's another example: do MuZero-style planning where you learn all the dynamics necessary to do model-based planning in a model-free way except for the reward function and then include the reward function explicitly.
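To make the first idea above more concrete, here is a minimal sketch (with assumed shapes, names, and a placeholder reward; none of these specifics come from the post) of a policy network that carries a differentiable copy of a known reward function and queries it on candidate actions during its own forward pass:

```python
import torch
import torch.nn as nn

def known_reward(state: torch.Tensor, action: torch.Tensor) -> torch.Tensor:
    """Stand-in for the explicitly known, differentiable reward function."""
    return -((action - state.mean(dim=-1, keepdim=True)) ** 2).sum(dim=-1)

class RewardQueryingPolicy(nn.Module):
    def __init__(self, state_dim: int, action_dim: int, n_queries: int = 8):
        super().__init__()
        self.proposer = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, n_queries * action_dim),
        )
        self.refiner = nn.Sequential(
            nn.Linear(state_dim + n_queries * (action_dim + 1), 64), nn.ReLU(),
            nn.Linear(64, action_dim),
        )
        self.n_queries = n_queries
        self.action_dim = action_dim

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        batch = state.shape[0]
        # Propose candidate actions, then query the embedded reward copy on them.
        candidates = self.proposer(state).view(batch, self.n_queries, self.action_dim)
        rewards = known_reward(state.unsqueeze(1), candidates)  # (batch, n_queries)
        # Feed the candidates and their rewards back in to pick a final action.
        features = torch.cat([state, candidates.flatten(1), rewards], dim=-1)
        return self.refiner(features)

policy = RewardQueryingPolicy(state_dim=4, action_dim=2)
states = torch.randn(16, 4)
actions = policy(states)
loss = -known_reward(states, actions).mean()
loss.backward()  # gradients flow through the final action and the internal reward queries
```

Because the embedded copy is differentiable, gradients flow back through the network's own reward queries, which is what distinguishes this setup from only ever seeing sampled reward values.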
I’m sure there are other possibilities that I haven’t thought about yet, however—possibly including papers on this in the literature that I’m not familiar with. Any ideas?
Answers
answer by ryan_b
This doesn't strike directly at the sampling question, but it is related to several of your ideas about incorporating the differentiable function: Neural Ordinary Differential Equations.
This is being exploited most heavily in the Julia community. The broader pitch is that they have formalized the relationship between differential equations and neural networks. This allows things like:
- applying differential equation tricks to computing the outputs of neural networks
- using neural networks to solve pieces of differential equations
- using differential equations to specify the weighting of information
The last one is the most intriguing to me, mostly because it solves the problem of machine learning models having to start from scratch even in environments where information about the environment's structure is known. For example, you can provide it with Maxwell's Equations and then it "knows" electromagnetism.
There is a blog post about the paper and using it with the DifferentialEquations.jl and Flux.jl libraries. There is also a good talk by Christopher Rackauckas about the approach.
It is mostly about using ML in the physical sciences, which seems to be going by the name Scientific ML now.
↑ comment by ryan_b · 2019-12-06T16:54:29.260Z · LW(p) · GW(p)
I don't know what the procedure for this is, but it occurs to me that if we can specify information about an environment via differential equations inside the neural network, then we can also compare this network's output to one that doesn't have the same information.
In the name of learning more about how to interpret the models, we could try something like:
1) Construct an artificial environment which we can completely specify via a set of differential equations.
2) Run a neural network to learn that environment with every combination of those differential equations.
3) Compare all of these to several control cases of not providing any differential equations.
It seems like how the control case differs from each of the cases-with-structural-information should give us some information about how the network learns the environmental structure.
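To make that comparison concrete, here is a minimal sketch of the two cases (assuming the Python torchdiffeq package rather than the Julia libraries mentioned above; the damped-oscillator dynamics and all names are illustrative assumptions):

```python
import torch
import torch.nn as nn
from torchdiffeq import odeint

class LearnedOnlyDynamics(nn.Module):
    """Control case: the network must learn the full vector field from scratch."""
    def __init__(self, dim=2):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 64), nn.Tanh(), nn.Linear(64, dim))

    def forward(self, t, y):
        return self.net(y)

class StructuredDynamics(nn.Module):
    """Case with structural information: a known ODE term plus a learned correction."""
    def __init__(self, dim=2, damping=0.1):
        super().__init__()
        self.damping = damping
        self.correction = nn.Sequential(nn.Linear(dim, 64), nn.Tanh(), nn.Linear(64, dim))

    def forward(self, t, y):
        # Known structure: damped harmonic oscillator, y = (position, velocity).
        x, v = y[..., 0:1], y[..., 1:2]
        known = torch.cat([v, -x - self.damping * v], dim=-1)
        return known + self.correction(y)

def rollout(dynamics, y0, t):
    """Integrate the dynamics forward so predictions can be compared to data."""
    return odeint(dynamics, y0, t)

t = torch.linspace(0.0, 5.0, 50)
y0 = torch.tensor([[1.0, 0.0]])
for model in (LearnedOnlyDynamics(), StructuredDynamics()):
    traj = rollout(model, y0, t)  # shape: (len(t), batch, dim)
    print(type(model).__name__, traj.shape)
```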
6 comments
comment by Matthew Barnett (matthew-barnett) · 2019-12-08T22:57:54.307Z · LW(p) · GW(p)
For the Alignment Newsletter:
Summary: A deep reinforcement learning agent trained by reward samples alone may predictably lead to a proxy alignment issue [LW · GW]: the learner could fail to develop a full understanding of what behavior it is being rewarded for, and thus behave unacceptably when it is taken off its training distribution. Since we often use explicit specifications to define our reward functions, Evan Hubinger asks how we can incorporate this information into our deep learning models so that they remain aligned off the training distribution. He names several possibilities for doing so, such as giving the deep learning model access to a differentiable copy of the reward function during training, and fine-tuning a language model so that it can map natural language descriptions of a reward function into optimal actions.
Opinion: I'm unsure, though leaning skeptical, whether incorporating a copy of the reward function into a deep learning model would help it learn. My guess is that if someone did that with a current model it would make the model harder to train, rather than making anything easier. I will be excited if someone can demonstrate at least one feasible approach to addressing proxy alignment that does more than sample the reward function.
Replies from: rohinmshah
↑ comment by Rohin Shah (rohinmshah) · 2019-12-17T00:43:42.992Z · LW(p) · GW(p)
My opinion (also going in the newsletter):
I'm skeptical of this approach. Mostly this is because I'm generally skeptical that an intelligent agent will consist of a separate "planning" part and "reward" part. However, if that were true, then I'd think that this approach could plausibly give us some additional alignment, but can't solve the entire problem of inner alignment. Specifically, the reward function encodes a _huge_ amount of information: it specifies the optimal behavior in all possible situations you could be in. The "intelligent" part of the net is only ever going to get a subset of this information from the reward function, and so its plans can never be perfectly optimized for that reward function, but instead could be compatible with any reward function that would provide the same information on the "queries" that the intelligent part has produced.
For a slightly-more-concrete example, for any "normal" utility function U, there is a utility function U' that is "like U, but also the best outcomes are ones in which you hack the memory so that the 'reward' variable is set to infinity". To me, wireheading is possible because the "intelligent" part doesn't get enough information about U to distinguish U from U', and so its plans could very well be optimized for U' instead of U.
This is rather abstract / complex so I'd be interested in suggestions for how to make it more understandable.
comment by gwern · 2019-12-05T00:33:21.465Z · LW(p) · GW(p)
You mean stuff like model-predictive control and planning? You can use backprop to do gradient ascent over a sequence of actions if you have a differentiable environment and/or reward model. This also has a lot of application to image CNNs: reversing GANs to encode an image for editing, optimizing to maximize a particular class (like maximally 'dog' or 'NSFW' images) etc. I cover some of the uses and history in https://www.gwern.net/Faces#reversing-stylegan-to-control-modify-images
My most recent suggestion in this vein was about OA/Christiano's preference learning, using gradient ascent directly on trajectories/strings, which avoids explicit sampling and rating in an environment.
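For concreteness, here is a short sketch of the planning-by-backprop idea (my own toy construction, assuming a differentiable dynamics model and reward model): treat the action sequence itself as the parameters and run gradient ascent on the resulting return.

```python
import torch

def dynamics(state, action):       # assumed differentiable environment model
    return state + action          # placeholder linear dynamics

def reward(state):                 # assumed differentiable reward model
    return -(state - 3.0).pow(2).sum()

actions = torch.zeros(10, 1, requires_grad=True)   # horizon of 10 scalar actions
opt = torch.optim.Adam([actions], lr=0.1)

for step in range(200):
    opt.zero_grad()
    state = torch.zeros(1)
    total = torch.zeros(())
    for a in actions:              # roll out and accumulate reward
        state = dynamics(state, a)
        total = total + reward(state)
    (-total).backward()            # gradient ascent on return = descent on -return
    opt.step()

print(actions.detach().squeeze())  # actions that drive the state toward 3.0
```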
Replies from: evhub
↑ comment by evhub · 2019-12-05T01:19:24.322Z · LW(p) · GW(p)
Hmmm... not sure if this is exactly what I want. I'd prefer not to assume too much about the environment dynamics. Not sure if this is related to what you're talking about, but one possibility for doing model-based planning with an explicit reward function, without assuming much about the environment dynamics, might be to learn all the dynamics necessary for model-based planning in a model-free way (like MuZero) except for the reward function, and then include the reward function explicitly.
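A rough sketch of what that could look like (my own construction, not MuZero itself; in particular, the learned model here predicts observations directly so that the known reward can be applied to its predictions):

```python
import torch
import torch.nn as nn

def known_reward(obs: torch.Tensor) -> torch.Tensor:
    """Explicitly known reward, here a placeholder preferring states near zero."""
    return -(obs ** 2).sum(dim=-1)

class LearnedDynamics(nn.Module):
    """Transition model trained on (obs, action, next_obs) tuples from experience."""
    def __init__(self, obs_dim: int, action_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + action_dim, 128), nn.ReLU(),
            nn.Linear(128, obs_dim),
        )

    def forward(self, obs: torch.Tensor, action: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([obs, action], dim=-1))

def plan(dynamics: LearnedDynamics, obs: torch.Tensor, action_dim: int,
         horizon: int = 5, n_candidates: int = 64) -> torch.Tensor:
    """Random-shooting planner: roll candidate action sequences through the
    learned dynamics and pick the first action of the sequence the *known*
    reward function scores highest."""
    candidates = torch.randn(n_candidates, horizon, action_dim)
    returns = torch.zeros(n_candidates)
    state = obs.expand(n_candidates, -1)
    for t in range(horizon):
        state = dynamics(state, candidates[:, t])
        returns = returns + known_reward(state)
    best = returns.argmax()
    return candidates[best, 0]

dynamics = LearnedDynamics(obs_dim=4, action_dim=2)
first_action = plan(dynamics, torch.zeros(1, 4), action_dim=2)
```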
comment by Donald Hobson (donald-hobson) · 2019-12-05T11:40:25.390Z · LW(p) · GW(p)
The r vs r' problem can be reduced if you can find a way to sample points of high uncertainty.
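One concrete version of this (an illustrative sketch, not from the comment): train an ensemble of reward models on the same data and preferentially sample states where the ensemble members disagree most.

```python
import torch
import torch.nn as nn

# An ensemble of small reward models; in practice each would be trained on the
# same reward samples from different initializations.
ensemble = [nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 1))
            for _ in range(5)]

def disagreement(states: torch.Tensor) -> torch.Tensor:
    """Standard deviation of ensemble reward predictions per state."""
    preds = torch.stack([m(states).squeeze(-1) for m in ensemble])  # (5, batch)
    return preds.std(dim=0)

candidates = torch.randn(1000, 4)                # candidate states to evaluate
scores = disagreement(candidates)
uncertain = candidates[scores.topk(32).indices]  # query these points next
```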
Replies from: evhub
↑ comment by evhub · 2019-12-05T19:28:09.040Z · LW(p) · GW(p)
Yep—that's the adversarial training approach to this problem. The problem is that you might not be able to sample all the relevant highly uncertain points (e.g. because you don't know exactly what the deployment distribution will be), which means you have to do some sort of relaxed adversarial training [AF · GW] instead, which introduces its own issues.