Transforming myopic optimization to ordinary optimization - Do we want to seek convergence for myopic optimization problems?
post by tailcalled · 2021-12-11
While reading some papers, I found a way to take a myopic optimization problem $f$ and construct a non-myopic optimization problem $g$ whose optima match the fixed points of the original problem $f$. This seems potentially of interest for myopia research; it could provide a way to think about what myopic optimization of $f$ might give you, a tool for analyzing the trajectories of myopic optimization, and maybe also a better-behaved alternative to myopic optimization.
A common form for myopic optimization, TLDR
In order to explain the transformation, I first need to introduce a common form for myopic optimization problems. The next section goes into detail with applying the common form, but I suspect many people here are sufficiently comfortable with myopic optimization problems that they can make do with the shorter explanation.
In ordinary optimization, we might minimize a loss function $f(m)$. That is, the optimized model would be given by the equation $m^* = \operatorname{argmin}_m f(m)$.
This is non-myopic; every influence of $m$ on the value of $f$ gets optimized. To introduce a myopic equivalent, we need to distinguish between two kinds of influence of $m$: the parts that we want to optimize, and the parts that we want to be myopic about.
We can do that by introducing two separate parameters, writing the loss as $f(m', m)$, with the first parameter being the one we optimize, and the second parameter being the one we are myopic about. Given a starting model $m_0$, we can then myopically update this model, yielding $m_{t+1} = \operatorname{argmin}_{m'} f(m', m_t)$.
This process can be iterated, but it has many poor properties, e.g. it is not guaranteed to converge, and it is unclear what its results will be. This post is about providing an alternative to that process.
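To make the shape of the process concrete, here is a minimal pseudocode-ish sketch (my own illustration; `argmin_first` is a hypothetical inner optimizer, not a real library function):

```python
# `f` is the bivariate loss; `argmin_first` is a hypothetical inner
# optimizer (e.g. a full training run) minimizing over the first parameter.
def iterate_myopically(f, argmin_first, m0, steps):
    m = m0
    for _ in range(steps):
        # Optimize the first parameter while holding the deployed model
        # (the second parameter) fixed; this is where the myopia lives.
        m = argmin_first(f, m)
    return m   # may cycle rather than settle; convergence is not guaranteed
```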
A common form for myopic optimization, long version
Let's consider an example of myopic optimization.
Suppose you have some predictive model $m$, which gets trained according to some loss function $L$ on a distribution of data points $D$. Essentially, it's trained to minimize $\mathbb{E}_{x \sim D}[L(m, x)]$.
If you deploy this model in the real world, then you would hope that it has some sort of real-world effect (if nothing else, bringing you more ad money); otherwise, what's the point? But if it has a real-world effect, then in particular it might also have a real-world effect on the distribution that it encounters.
This might mean that the distribution it was trained on is no longer good for getting it to optimal performance; we should train it on the new distribution $D(m_0)$ instead. But since this distribution arose as a result of the model, it is really a function of the model. So when we retrain on the data we got after deploying $m_0$, that yields a new model $m_1$ that might induce yet another training distribution $D(m_1)$.
If we consider iterating this endlessly, what will happen? Will it be like optimizing the following expression?

$$\mathbb{E}_{x \sim D(m)}[L(m, x)]$$
If it is like optimizing this expression, then that seems somewhat concerning. Because this expression includes the dependency of the dataset on the model, optimizing it would, as a side-effect, optimize the dataset to become more predictable. How one would go about making the real world more predictable is a bit unclear, but considering all of the AI safety concerns for even seemingly innocuous loss functions, this is not something we want to get involved with.
So is this what would get optimized? No; this is where the myopia comes in. When optimizing the model, we don't "look ahead" and think about how the dataset will change in response to the new model. In machine learning terms, we don't propagate the gradients through $D(m)$.
Rather, it makes more sense to specify the optimization problem using a bivariate loss function, taking as parameters both the model that is being optimized and the model that is currently deployed:

$$f(m', m) = \mathbb{E}_{x \sim D(m)}[L(m', x)]$$
In each iteration of optimization, we then replace the deployed model with one that is optimized in the first parameter. This can be expressed as the equation:

$$m_{t+1} = \operatorname{argmin}_{m'} f(m', m_t)$$
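As a concrete (and heavily simplified) sketch of one such iteration, here is a toy instantiation in JAX, where the "dataset induced by the deployed model" is replaced by a made-up target `0.5 * m`, purely for illustration:

```python
import jax
import jax.numpy as jnp

# Toy instantiation: the dataset induced by the deployed model m is
# summarized by a training target 0.5 * m (a made-up stand-in for
# E_{x ~ D(m)}, purely for illustration).
def f(m_new, m_dep):
    target = 0.5 * m_dep
    return (m_new - target) ** 2

# Myopia: differentiate with respect to the first argument only, so no
# gradient flows through the deployed model / induced dataset.
grad_new = jax.grad(f, argnums=0)

def myopic_update(m_dep, lr=0.1, steps=200):
    """Approximate m_{t+1} = argmin_{m'} f(m', m_dep) by gradient descent."""
    m = m_dep
    for _ in range(steps):
        m = m - lr * grad_new(m, m_dep)
    return m

m = jnp.array(1.0)
for t in range(5):
    m = myopic_update(m)
    print(t, float(m))   # this particular toy happens to converge toward 0
```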
This optimization method has a number of bad properties. For instance, it is not guaranteed to converge. With ordinary non-myopic optimization, $f(m)$ always decreases as you optimize it. As long as $f$ is bounded below, this means that you cannot keep coming up with new models that decrease it, and so you must eventually reach the best model.^1
However, in myopic optimization, while $f(m_t, m_t)$ is always guaranteed to be greater than or equal to $f(m_{t+1}, m_t)$, it might be that $f(m_t, m_t)$ is less than or equal to $f(m_{t+1}, m_{t+1})$. This can happen in "rock-paper-scissors"-like situations, where the optimal solution always changes away from the deployed solution. Hence it might be nice to have an alternative approach.
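Here is a toy numeric illustration of such a cycle, assuming pure strategies and a made-up rock-paper-scissors loss matrix (my own construction):

```python
import numpy as np

# loss[i, j] = loss of playing i against j (0 = rock, 1 = paper, 2 = scissors)
LOSS = np.array([[0.5, 1.0, 0.0],
                 [0.0, 0.5, 1.0],
                 [1.0, 0.0, 0.5]])

def f(new, deployed):
    """Bivariate loss: how badly the new model does against the deployed one."""
    return LOSS[new, deployed]

deployed = 0   # start by deploying "rock"
for t in range(6):
    # Myopic update: best-respond to the currently deployed model, without
    # anticipating that deployment will change the environment.
    new = int(np.argmin([f(a, deployed) for a in range(3)]))
    print(f"step {t}: deployed={deployed} -> best response {new}")
    deployed = new
# The deployed strategy cycles rock -> paper -> scissors -> rock -> ...,
# so the iteration never converges to a fixed point.
```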
Characterizing the convergent points
So, the general form of myopic optimization can be considered to be:

$$m_{t+1} = \operatorname{argmin}_{m'} f(m', m_t)$$
And this is a terrible equation.^2 So we want to improve it.
To make it easier to reason about, let's make an assumption: it has converged to a fixed point. More precisely, we have a value $m^*$ with $f(m^*, m^*) = \min_{m'} f(m', m^*)$; that is, there is no way to change the deployed model to perform better on the training data that comes from the deployed model.
An immediate implication of this is that $f(m^*, m^*) - \min_{m'} f(m', m^*) = 0$, and so $g(m^*) = 0$, where we define $g(m) = f(m, m) - \min_{m'} f(m', m)$. But notice that since $\min_{m'} f(m', m) \le f(m, m)$ for every $m$, the left-hand side in the previous equation is always nonnegative, i.e. $g \ge 0$ everywhere. So this means that the equation holds exactly when the function $g$ is minimized.
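As a minimal sketch of computing $g$ on the earlier toy loss (with the inner $\min$ approximated by a grid search, which only works in this one-dimensional setting):

```python
import numpy as np

def f(m_new, m_dep):
    # same toy bivariate loss as in the sketch above
    return (m_new - 0.5 * m_dep) ** 2

def g(m, candidates):
    """Transformed objective g(m) = f(m, m) - min_{m'} f(m', m),
    with the inner min approximated by a grid search."""
    inner_min = min(f(c, m) for c in candidates)
    return f(m, m) - inner_min

grid = np.linspace(-2.0, 2.0, 401)
for m in [1.0, 0.5, 0.1, 0.0]:
    print(m, g(m, grid))   # g is nonnegative and hits 0 at the fixed point m* = 0
```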
The function $g$ is in some ways much better behaved. It tells us what the fixed point is, rather than just specifying a training procedure. However, in other ways it is less well-behaved:
- The trajectory followed when doing myopic optimization on $f$ won't actually be the same as the trajectory followed when optimizing $g$ by gradient descent. In fact, often you might not be able to directly optimize $g$ at all; for instance, the part that you are myopic about is often an emergent property of the real world, rather than a clean math expression.
- $g$ contains an inner $\min_{m'} f(m', m)$ expression, which is hard to evaluate, and whose general results might be hard to predict.
- $g$ does non-myopic optimization on $m$ in the $f(m, m)$ term, and simply hopes that this gets cancelled out by the $-\min_{m'} f(m', m)$ term. This is not necessarily the case; if the myopic optimization fails to converge, then optimization on $g$ might converge to something that has very different consequences from what would happen if we directly optimized $f$ myopically.
These limitations together seem like a good reason not to apply this transformation indiscriminately. That said, it seems like there is often a good reason, at least when it comes to assessing the safety of $f$: I would be concerned about a myopic optimization problem whose transformed version $g$ is unsafe; while we would not be guaranteed to reach the unsafe region, it seems like myopic optimization of $f$ increases the likelihood of entering the unsafe optimum of $g$ a lot.
Is this useful?
I got the concept from this paper, where a basically-identical concept from game theory, called the "duality gap", is investigated for machine learning. In order to evaluate the usefulness of this transformation for myopic optimization in AI alignment and agency foundations, we need to know more about its properties. I have tried searching around a bit in the game theory literature, but I haven't immediately seen anything that is important for our purposes. Though I also haven't had time to search much, due to my day job.
Fundamentally, it seems to me that there are some potential use-cases for it:
- Evaluate the likely convergence properties of a myopic optimization: if myopic optimization on $f$ tends to decrease $g$, then it seems likely to converge; otherwise it seems likely to diverge (see the sketch after this list).
- Evaluate the possible properties of myopic optimization: it seems reasonable to think about optima of $g$ to get plausible answers about the properties that myopic optimization on $f$ could give (though how reasonable depends on things like whether the optimization will converge).
- Replace myopic optimization with non-myopic optimization; $g$ may possibly be nicer to optimize than $f$, at least in some cases.
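As a sketch of the first use-case, applied to the rock-paper-scissors toy from earlier (now over mixed strategies; the uniform strategy is the unique fixed point there):

```python
import numpy as np

LOSS = np.array([[0.5, 1.0, 0.0],
                 [0.0, 0.5, 1.0],
                 [1.0, 0.0, 0.5]])

def f(p_new, p_dep):
    """Expected loss of mixed strategy p_new against deployed p_dep."""
    return p_new @ LOSS @ p_dep

def g(p_dep):
    # The inner min over mixed strategies is linear in the first argument,
    # so it is attained at a pure strategy (a vertex of the simplex).
    inner_min = min(LOSS[a] @ p_dep for a in range(3))
    return f(p_dep, p_dep) - inner_min

rock = np.array([1.0, 0.0, 0.0])
uniform = np.ones(3) / 3
print(g(rock))     # 0.5: g stays at 0.5 all along the best-response cycle
print(g(uniform))  # 0.0: the uniform strategy is the fixed point
```

Since the myopic best-response cycle never decreases $g$, this diagnoses the divergence of the earlier iteration.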
When I consider applying the transformation to some obscure myopic optimization problems that I have thought about, it seems to have some nice properties, but also some potentially concerning ones. I will likely make a follow-up post where I explain them. But the myopic optimization problems that I am considering are kind of convoluted, so they don't really fit here.
I thought it might be a good idea to leave this open to LessWrong: are there some myopic optimization problems that you have been thinking about? And if so, what effects would this de-myopization transformation have on them?
(Should we call it "duality gap transformation", considering its origins? Or something else?)
1. This is actually a lie. This is not sufficient to prove convergence, and this setup also doesn't at all resemble how optimization gets done in practice. In reality, convergence is a messy issue even for non-myopic optimization. However, it's a useful simplification.
2. It is terrible partly because in the limiting form, it is a fixpoint equation ($m^* = \operatorname{argmin}_{m'} f(m', m^*)$), which is hard to reason about. There are variants that are even more terrible; machine learning people might usually phrase it as $m^* = \operatorname{argmin}_m f(m, \operatorname{stop\_gradient}(m))$. This uses a mysterious function $\operatorname{stop\_gradient}$ with the property that $\operatorname{stop\_gradient}(m) = m$, but $\frac{d}{dm}\operatorname{stop\_gradient}(m) = 0$, which of course cannot be a real function.
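For what it's worth, autodiff frameworks do implement this "impossible" function, as an operation on the computation graph rather than a real mathematical function; for instance JAX's `jax.lax.stop_gradient`. A minimal sketch:

```python
import jax
import jax.numpy as jnp

def f(m):
    m_dep = jax.lax.stop_gradient(m)   # value m, derivative treated as 0
    return (m - 0.5 * m_dep) ** 2      # the toy bivariate loss from earlier

# The gradient treats the second occurrence of m as a constant:
print(jax.grad(f)(jnp.array(1.0)))    # 1.0, not the "full" gradient 0.5
```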
Thanks to Justis Mills for proofreading.