Can we infer the search space of a local optimiser?

post by Lucius Bushnaq (Lblack) · 2025-02-03T10:17:01.661Z · LW · GW · No comments

This is a question post.

Contents

  Setup
  Example of what this could look like
  Summary
  Answers
    Alexander Gietelink Oldenziel

Suppose we have a system that we suspect is some form of local optimiser. Maybe it's gradient descent training a neural network. Maybe it's a neural network doing in-context learning. Maybe it's a mind, because we guess that the operation of minds in general is, much of the time, reasonably well-described as local movement in some 'abstraction space'. Maybe it's something else.

Is there, on paper, a general way:

  1. To check whether our guess that the system is a local optimiser is correct?
  2. To find the space in which the optimiser is taking local steps?

Setup

Concretely, say a system has a state vector $x$, which we can see. $x$ evolves over time steps $t$, so let's call it $x_t$. Our guess is that the system evolves $x_t$ through some local optimisation method. But it's not really local in $x$-space. It's local in some other space we don't know about. Let's call the state vector in that hidden space we don't know about $z_t \in Z$, where $Z$ is some space. $Z$ seems guaranteed to be a topological space, else the idea of a local search in it wouldn't make much sense. Maybe in a lot of cases we can also assume that it is a metric space? Not sure about that one, but a lot of the local search algorithms I can think of seem to have some notion of distance between points in the space.

So, overall the system maps $x_t \mapsto x_{t+1}$ as $x_t \xrightarrow{f} z_t \rightarrow z_{t+1} \xrightarrow{f^{-1}} x_{t+1}$, where the transformations $f$ and $f^{-1}$ can be complicated and nonlinear.

$z_t \rightarrow z_{t+1}$ is assumed to be a step in some local optimisation method. That is to say, there is some evaluation function $E(z)$ that the system is trying to minimise, and it tries to do this minimisation via something like a genetic algorithm, gradient descent, Adam, or something else that makes use of local information to take one step in the space $Z$ per time step $t$. The evaluation function $E$ should have the kind of loss landscape a local optimiser could reasonably navigate. No parity learning problems [LW · GW] or anything like that. It also shouldn't be completely trivial to find the minimum of $E$. I.e. if the optimiser is using Newton's method, $E$ shouldn't just be quadratic everywhere, else there isn't much of a search trajectory for us to look at because the system always finds the minimum in one step.
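To make that setup concrete, here is a minimal toy sketch in Python/NumPy (the particular $E$, the observation map, and all constants are arbitrary choices for illustration, not anything from the setup above): a system that takes plain gradient descent steps in a hidden 2D space $Z$, while we only ever see the trajectory pushed through a fixed nonlinear map into a 5D observed space.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hidden search space Z = R^2; evaluation function E(z) the system minimises.
A = np.array([[3.0, 0.5], [0.5, 1.0]])
def E(z):
    return 0.5 * z @ A @ z

def grad_E(z):
    return A @ z

# Fixed nonlinear map from hidden states z to observed states x.
# (Chosen arbitrarily here; in the real problem it is unknown to us.)
W = rng.normal(size=(5, 2))
def f_inv(z):
    return np.tanh(W @ z) + 0.1 * (W @ z) ** 2

# Run gradient descent in Z, but record only the observed trajectory x_t.
z = rng.normal(size=2)
lr = 0.1
observed = []
for t in range(50):
    observed.append(f_inv(z))
    z = z - lr * grad_E(z)          # local step in Z

X = np.stack(observed)              # shape (50, 5): all we get to see
print(X.shape, E(z))                # E(z) has shrunk, but only in the hidden space
```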

Now, what I'm wondering about is this: Is there some characteristic attribute of the trajectories local optimisers take in their search spaces which we can use to figure out the map $f$ with reasonable confidence, provided we have enough states $x_t$ from different searches to look at?

Example of what this could look like

I'm not claiming the following would actually work, I'm just trying to point to the sort of thing my brain is thinking about by giving a concrete example.

In a lot of cases I can think of, neighbouring points in the time series of a local search will tend to be close to each other in terms of Euclidean distance in the search space. So, we could try to find a map $g$ such that $\lVert g(x_{t+1}) - g(x_t) \rVert_2$ will tend to be small. We could do this by training an autoencoder with a deep and expressive architecture on a loss of:

  1. Reconstructing the original system outputs. So, learn some map $g^{-1}$ with $g^{-1}(g(x_t)) \approx x_t$.
  2. Minimising some term like $\frac{\mathbb{E}(d_t)}{\sigma(g(x_t))}$, with $d_t = \lVert g(x_{t+1}) - g(x_t) \rVert_2$, where the standard deviation $\sigma$ and expectation $\mathbb{E}$ are taken over the batch.

And then maybe the map $g$ we can now explicitly see in the autoencoder would tend to roughly match the map $f$ we're looking for, up to trivial transformations like affine transforms. And if the autoencoder training can't converge to a good loss, then maybe that means the original system wasn't a local optimiser, or at least not a local optimiser taking small steps in a Euclidean space.
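A minimal sketch of what that training setup could look like, assuming PyTorch and assuming the step-size term from point 2 above (mean latent step size, normalised by the spread of the latents over the batch so the encoder can't cheat by shrinking everything). The architecture sizes and the weight `lam` are placeholder choices:

```python
import torch
import torch.nn as nn

latent_dim, obs_dim = 2, 5

# Deep, expressive encoder g and decoder g_inv (sizes are arbitrary here).
g = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(),
                  nn.Linear(64, 64), nn.ReLU(),
                  nn.Linear(64, latent_dim))
g_inv = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(),
                      nn.Linear(64, 64), nn.ReLU(),
                      nn.Linear(64, obs_dim))

opt = torch.optim.Adam(list(g.parameters()) + list(g_inv.parameters()), lr=1e-3)
lam = 1.0  # weight on the step-size term; a free choice

def train_step(x_t, x_next):
    """x_t, x_next: (batch, obs_dim) pairs of consecutive observed states."""
    z_t, z_next = g(x_t), g(x_next)

    # 1. Reconstruct the original system outputs: g_inv(g(x)) ~ x.
    recon = ((g_inv(z_t) - x_t) ** 2).mean() + ((g_inv(z_next) - x_next) ** 2).mean()

    # 2. Make consecutive latents close, relative to the overall spread of
    #    the latent cloud over the batch.
    d = (z_next - z_t).norm(dim=1)           # step sizes d_t
    spread = z_t.std(dim=0).mean() + 1e-8    # scale of the latent cloud
    step_term = d.mean() / spread

    loss = recon + lam * step_term
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

Pairs $(x_t, x_{t+1})$ would be sampled from many recorded search trajectories. If no reasonable architecture gets both terms low, that would be weak evidence against the "small Euclidean steps in some latent space" guess.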

Summary

  1. Should we expect local search spaces to often be describable as metric spaces, for the kinds of local searches we might often see in real life? Local searches run by in-context learning algorithms or other minds, for example.
  2. If a local optimiser maps $x_t \mapsto x_{t+1}$ as $x_t \xrightarrow{f} z_t \rightarrow z_{t+1} \xrightarrow{f^{-1}} x_{t+1}$, with $z_t$ being the states in the local search space, can we infer the map $f$ from seeing enough states $x_t$,[1] by leveraging our knowledge that the time series $z_t$ has to look like the trajectory of a local search?
  1. ^

    Up to affine transformations and such of course.

Answers

answer by Alexander Gietelink Oldenziel · 2025-02-03T13:59:46.634Z · LW(p) · GW(p)

I'm actually curious about a related problem. 

One of the big surprises of the deep learning revolution has been the universality of gradient descent optimization. 

How large is the class of optimization problems that we can transform into a gradient descent problem of some kind? My suspicion is that it's a very large class; perhaps there is even a general way to transform any problem into a gradient descent optimization problem? 

The natural thing that comes to mind is to consider gradient descent of Lagrangian energy functionals in (optimal) control theory. 
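One toy version of that last idea, sketched under arbitrary assumptions (the 1D double-integrator dynamics, horizon, and cost weights are all illustrative choices): discretise a control trajectory, write the total Lagrangian-style cost as a single differentiable functional, and run plain gradient descent on the control sequence.

```python
import torch

T, dt = 50, 0.1
u = torch.zeros(T, requires_grad=True)        # control sequence to optimise
opt = torch.optim.Adam([u], lr=0.1)

def total_cost(u):
    # Roll out a 1D double integrator (x'' = u) and accumulate a quadratic
    # running cost on state and control effort, plus a terminal penalty.
    x, v = torch.tensor(1.0), torch.tensor(0.0)
    cost = 0.0
    for t in range(T):
        v = v + dt * u[t]
        x = x + dt * v
        cost = cost + dt * (x ** 2 + 0.1 * u[t] ** 2)
    return cost + 10.0 * x ** 2

for step in range(300):
    opt.zero_grad()
    c = total_cost(u)
    c.backward()
    opt.step()

print(float(total_cost(u)))  # much lower than the cost of doing nothing (u = 0)
```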

No comments
