Laplace Approximation

post by johnswentworth · 2019-07-18T15:23:28.140Z

Contents

  Laplace Approximation
  Laplace Complexity Penalty

The last couple posts compared some specific models for 20000 rolls of a die. This post will step back, and talk about more general theory for Bayesian model comparison.

The main problem is to calculate $P[\text{data}|\text{model}]$ for some model. The model will typically give the probability $P[\text{data}|\theta]$ of observed data (e.g. die rolls) based on some unobserved parameter values $\theta$ (e.g. the die-face probabilities in the last two posts), along with a prior distribution $dP[\theta]$ over $\theta$. We then need to compute

$$P[\text{data}|\text{model}] = \int_{\theta} P[\text{data}|\theta] \, dP[\theta]$$

which will be a hairy high-dimensional integral.
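To make that integral concrete, here's a minimal sketch in Python. It uses a hypothetical one-parameter coin-flip model (not the die models from the previous posts), computes the evidence by brute-force numerical integration, and checks it against the known closed form:

```python
# Hypothetical model: 10 coin flips, 7 heads, with a uniform prior on the
# heads-probability theta, so dP[theta] = d(theta) on [0, 1].
import numpy as np
from scipy.special import beta

n_heads, n_tails = 7, 3

def likelihood(theta):
    # P[data|theta] for one specific observed sequence of flips
    return theta**n_heads * (1 - theta)**n_tails

# Brute-force the integral P[data|model] = integral of P[data|theta] d(theta).
dtheta = 1e-5
theta = np.arange(dtheta / 2, 1, dtheta)  # midpoint rule on a fine grid
evidence_numeric = np.sum(likelihood(theta)) * dtheta

# Closed form for this model: the Beta function B(8, 4) = 1/1320.
evidence_exact = beta(n_heads + 1, n_tails + 1)
print(evidence_numeric, evidence_exact)  # both ~7.576e-4
```

In one dimension the brute-force approach is trivial; the trouble is that for a model with dozens of parameters, a grid needs a number of points exponential in the dimension, hence the approximation methods below.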

Some special model structures allow us to simplify the problem, typically by factoring the integral into a product of one-dimensional integrals. But in general, we need some method for approximating these integrals.
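For instance (a generic illustration, not one of the die models): if the data split into two independent groups, each with its own parameter and an independent prior, the integral factors:

$$\int P[\text{data}_1|\theta_1] \, P[\text{data}_2|\theta_2] \; dP[\theta_1] \, dP[\theta_2] = \left(\int P[\text{data}_1|\theta_1] \, dP[\theta_1]\right) \left(\int P[\text{data}_2|\theta_2] \, dP[\theta_2]\right)$$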

The two most common approximation methods used in practice are Laplace approximation around the maximum-likelihood point, and MCMC (see e.g. here for application of MCMC to Bayes factors). We'll mainly talk about Laplace approximation here - in practice MCMC mostly works well in the same cases, assuming the unobserved parameters are continuous.

Laplace Approximation

Here's the idea of the Laplace approximation. First, posterior distributions tend to be very pointy. This is mainly because independent probabilities multiply, so probabilities tend to scale exponentially with the number of data points. Think of the probabilities we calculated in the last two posts, where the exponents ran into the thousands - that's the typical case. If we're integrating over a function with values like that, we can basically just pay attention to the region around the highest value - other regions will have exponentially small weight.
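To see just how pointy these things get, here's a quick Python sketch with made-up counts (these are not the actual counts from the previous posts): even a 1% nudge away from the maximum-likelihood frequencies costs a likelihood factor of roughly $e^{12}$.

```python
# Made-up counts for 20000 die rolls (hypothetical, for illustration only).
import numpy as np

counts = np.array([3300, 3400, 3350, 3270, 3380, 3300])  # sums to 20000

def log_likelihood(p):
    # Multinomial log-likelihood (dropping the constant multinomial coefficient)
    return np.sum(counts * np.log(p))

p_mle = counts / counts.sum()                        # maximum-likelihood frequencies
p_off = p_mle + np.array([0.01, -0.01, 0, 0, 0, 0])  # nudge two faces by 1%

print(log_likelihood(p_off) - log_likelihood(p_mle))
# ~ -12: the nudged parameters are about e^12 ~ 160000 times less likely,
# and the falloff steepens exponentially as we move further from the peak.
```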

Laplace's trick is to use a second-order approximation within that high-valued region. Specifically, since probabilities naturally live on a log scale, we'll take a second-order approximation of the log likelihood around its maximum point $\theta^*$. Thus:

$$\ln P[\text{data}|\theta] \approx \ln P[\text{data}|\theta^*] + \frac{1}{2}(\theta - \theta^*)^T \left(\frac{\partial^2 \ln P[\text{data}|\theta]}{\partial \theta \, \partial \theta^T}\bigg|_{\theta^*}\right)(\theta - \theta^*)$$

(there is no first-order term, since the gradient is zero at the maximum). If we assume that the prior is uniform (i.e. $dP[\theta] = d\theta$), then this looks like a normal distribution on $\theta$ with mean $\theta^*$ and covariance given by the inverse of the negated Hessian of the log-likelihood. (It turns out that, even for non-uniform $dP[\theta]$, we can just transform $\theta$ so that the prior looks uniform near $\theta^*$, and transform it back when we're done.) The result:

$$P[\text{data}|\text{model}] \approx P[\text{data}|\theta^*] \, (2\pi)^{k/2} \, \det\left(-\frac{\partial^2 \ln P[\text{data}|\theta]}{\partial \theta \, \partial \theta^T}\bigg|_{\theta^*}\right)^{-1/2}$$

where $k$ is the dimension of $\theta$.

Let's walk through each of those pieces:

- $P[\text{data}|\theta^*]$ is the height of the peak: the likelihood of the data at the maximum-likelihood parameters.
- $(2\pi)^{k/2}$ is the usual normalization constant for a $k$-dimensional normal distribution.
- The determinant factor measures the volume of the peak: how wide it is along each direction in $\theta$-space.

A bit more detail on that last piece: intuitively, each eigenvalue of the Fisher information matrix tells us the approximate width of the peak in a particular direction. Since the matrix is the inverse variance (in one dimension, $\frac{1}{\sigma^2}$) of our approximate normal distribution, and "width" of the peak of a normal distribution corresponds to the standard deviation $\sigma$, we use an inverse square root (i.e. the power of $-\frac{1}{2}$) to extract a width from each eigenvalue. Then, to find how much volume the peak covers, we multiply together the widths along each direction - thus the determinant, which is the product of eigenvalues.

Why do we need eigenvalues? Under a second-order approximation, the function's level sets near the peak are ellipses, and the eigenvectors of the Hessian at the peak are the principal axes of those ellipses. They are the natural directions along which to measure the width. The eigenvalue associated with each eigenvector tells us the width along that axis (via the inverse square root, as above), and taking their product (via the determinant) gives a volume - proportional to the volume enclosed by any of the ellipses.
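A small numeric check of this volume story, using a made-up 2-dimensional Hessian: the product of the per-direction widths $\lambda_i^{-1/2}$ equals $\det(-H)^{-1/2}$.

```python
import numpy as np

neg_hessian = np.array([[4.0, 1.0],
                        [1.0, 9.0]])  # -H at the peak (made-up, positive definite)

eigenvalues = np.linalg.eigvalsh(neg_hessian)  # ~ [3.81, 9.19]
widths = eigenvalues**-0.5                     # peak width along each principal axis

print(np.prod(widths))                   # ~0.169: product of the widths...
print(np.linalg.det(neg_hessian)**-0.5)  # ~0.169: ...equals det(-H)^(-1/2)
```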

Altogether, then, the Laplace approximation takes the height of the peak (i.e. $P[\text{data}|\theta^*]$) and multiplies by the volume of $\theta$-space which the peak occupies, based on a second-order approximation of the likelihood around its peak.
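Putting the pieces together, here's a minimal sketch of the whole approximation in Python, for the same hypothetical coin-flip model as above (one parameter, uniform prior, second derivative by finite differences), checked against the exact evidence:

```python
import numpy as np
from scipy.special import beta

n_heads, n_tails = 7, 3

def log_likelihood(theta):
    return n_heads * np.log(theta) + n_tails * np.log(1 - theta)

theta_star = n_heads / (n_heads + n_tails)  # maximum-likelihood point, 0.7

# Second derivative of the log-likelihood at theta* (central finite difference).
h = 1e-5
d2 = (log_likelihood(theta_star + h) - 2 * log_likelihood(theta_star)
      + log_likelihood(theta_star - h)) / h**2

# 1-D Laplace formula: P[data|theta*] * (2*pi)^(1/2) * det(-H)^(-1/2).
evidence_laplace = np.exp(log_likelihood(theta_star)) * np.sqrt(2 * np.pi / -d2)
evidence_exact = beta(n_heads + 1, n_tails + 1)
print(evidence_laplace, evidence_exact)  # ~8.08e-4 vs ~7.58e-4
```

The two differ by under 10% here, even with only ten data points; the approximation tightens as the peak sharpens, i.e. as the number of data points grows.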

Laplace Complexity Penalty

The Laplace approximation contains our first example of an explicit complexity penalty.

The idea of a complexity penalty is that we first find the maximum log likelihood $\ln P[\text{data}|\theta^*]$, maybe add a term $\ln \rho(\theta^*)$ for our $\theta$-prior, and that's the "score" of our model. But more general models, with more free parameters, will always score at least as well, leading to overfit. To counterbalance that, we calculate some numerical penalty which is larger for more complex models (i.e. those with more free parameters) and subtract that penalty from the raw score.

In the case of the Laplace approximation, a natural complexity penalty drops out as soon as we take the log of the approximation formula:

$$\ln P[\text{data}|\text{model}] \approx \ln P[\text{data}|\theta^*] + \frac{k}{2}\ln 2\pi - \frac{1}{2}\ln \det\left(-\frac{\partial^2 \ln P[\text{data}|\theta]}{\partial \theta \, \partial \theta^T}\bigg|_{\theta^*}\right)$$

The last two terms are the complexity penalty. As we saw above, they give the (log) volume of the likelihood peak in $\theta$-space. The wider the peak, the larger the chunk of $\theta$-space which actually predicts the observed data.
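In code, the split looks like this (continuing the hypothetical coin example; for that model the negated second derivative at $\theta^*$ works out to $n^3/(n_{\text{heads}} \cdot n_{\text{tails}})$):

```python
import numpy as np

n_heads, n_tails = 7, 3
n = n_heads + n_tails
theta_star = n_heads / n
k = 1  # one free parameter

raw_score = n_heads * np.log(theta_star) + n_tails * np.log(1 - theta_star)

neg_hess = n**3 / (n_heads * n_tails)  # -d2/dtheta2 of ln P[data|theta] at theta*, ~47.6
penalty = 0.5 * np.log(neg_hess) - (k / 2) * np.log(2 * np.pi)

print(raw_score, penalty, raw_score - penalty)
# ~ -6.11, ~1.01, ~ -7.12; and e^-7.12 ~ 8.1e-4 matches the Laplace
# evidence computed earlier.
```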

There are two main problems with this complexity penalty:

- It leans on the second-order approximation being good. If the posterior has multiple widely-separated peaks, heavy skew, or near-flat directions (e.g. from symmetries among the parameters, which make the Hessian nearly singular), a quadratic fit around one peak can badly misestimate the integral.
- Runtime. For a model with $k$ parameters, the Hessian has $k^2$ entries, and computing its log determinant takes roughly $O(k^3)$ time.

Historically, a third issue was the math/coding work involved in calculating a Hessian, but modern backprop tools like TensorFlow or autograd make that pretty easy; I expect in the next few years we'll see a lot more people using a Laplace-based complexity penalty directly. The runtime remains a serious problem for large-scale models, however, and that problem is unlikely to be solved any time soon: a linear-time method for computing the Hessian log determinant would yield an $O(n^2)$ matrix multiplication algorithm.
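For example, here's a sketch using the autograd library mentioned above (assuming `pip install autograd`; the model is still the hypothetical coin flip): the Hessian comes out of one function call, with no hand-derived second derivatives.

```python
import autograd.numpy as anp
from autograd import hessian
import numpy as np

n_heads, n_tails = 7.0, 3.0

def negative_log_likelihood(theta):
    # theta is a length-1 array, so the Hessian below is a 1x1 matrix
    return -(n_heads * anp.log(theta[0]) + n_tails * anp.log(1 - theta[0]))

theta_star = anp.array([n_heads / (n_heads + n_tails)])
H = hessian(negative_log_likelihood)(theta_star)  # shape (1, 1)

sign, logdet = np.linalg.slogdet(H)
print(logdet)  # ~ ln(47.6) ~ 3.86, matching the hand-derived value above
```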


3 comments


comment by Bucky · 2019-08-21T10:51:28.626Z

For an introduction to MCMC aimed at a similar target audience, I found this explanation helpful.

comment by Bucky · 2019-07-21T20:46:59.587Z

Thanks for this sequence, I've read each post 3 or 4 times to try to properly get it.

Am I right in thinking that in order to replace $dP[\theta]$ with $d\theta$ we not only require a uniform prior but also that $\theta$ span unit volume?


comment by johnswentworth · 2019-07-21T21:26:32.346Z

Correct. In general, $\frac{dP}{d\theta}$ is the probability density of $\theta$, so if it's uniform on a unit volume then $dP[\theta] = d\theta$.

The main advantage of this notation is that it's parameterization-independent. For example: in a coin-flipping example, we could have a uniform prior over the frequency of heads $f$, so $dP[f] = df$. But then, we could re-write that frequency in terms of the odds $o = \frac{f}{1-f}$, so we'd get $f = \frac{o}{1+o}$ and

$$dP[o] = dP[f] = df = d\left(\frac{o}{1+o}\right) = \frac{do}{(1+o)^2}$$

So the probability density $\rho(f) = 1$ is equivalent to the density $\rho(o) = \frac{1}{(1+o)^2}$. (That first step, $dP[o] = dP[f]$, is because these two variables contain exactly the same information in two different forms - that's the parameterization independence. After that, it's math: substitute and differentiate.)

(Notice that the uniform prior on $f$ is not uniform over $o$. This is one of the main reasons why "use a uniform prior" is not a good general-purpose rule for choosing priors: it depends on what parameters we choose. Cartesian and polar coordinates give different "uniform" priors.)

The moral of the story is that, when dealing with continuous probability densities, the fundamental "thing" is not the density function $\rho(\theta)$ but the density times the differential $\rho(\theta)\,d\theta$, which we call $dP[\theta]$. This is important mainly when changing coordinates: if we have some coordinate change $u = u(\theta)$, then $dP[u] = dP[\theta]$, but $\rho(u) \neq \rho(\theta)$.
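As a quick numeric check of this rule (a minimal Python sketch): sample $f$ uniformly, transform to odds, and compare the empirical distribution of $o$ against the CDF implied by $\rho(o) = \frac{1}{(1+o)^2}$, namely $P[o \le x] = \frac{x}{1+x}$.

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.uniform(size=1_000_000)  # uniform prior on the frequency f
o = f / (1 - f)                  # transform to odds

# Under rho(o) = 1/(1+o)^2, the CDF is P[o <= x] = x/(1+x).
for x in [0.5, 1.0, 3.0]:
    print(np.mean(o <= x), x / (1 + x))  # empirical vs predicted, ~equal
```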

If anybody wants an exercise with this: try transforming $\theta$ to a different coordinate system. Apply Laplace's approximation in both systems, and confirm that they yield the same answer. (This should mainly involve applying the chain rule twice to the Hessian; if you get stuck, remember that $\theta^*$ is a maximum point and consider what that implies.)