Comments

Comment by Matteo Capucci (matteo-capucci) on What’s up with LLMs representing XORs of arbitrary features? · 2024-01-04T19:48:09.358Z · LW · GW

Here's what my spidey sense is telling me: the model is trying to fit as many representations as possible (IIRC this is known in mech interp), and by merely pushing features $a$ and $b$ apart in a high-dimensional space you end up making $a \oplus b$ linearly separable. That is, there might be a combinatorial phenomenon underlying this, which feels counterintuitive because of the large dimensions involved.
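A rough sanity check of the high-dimensional intuition (assuming, purely for illustration, that the representation acts like random nonlinear features of $a$ and $b$; the dimension `d = 200` and the ReLU feature map are my choices, not anything from the post): in the raw 2-d input space XOR is famously not linearly separable, but a linear read-off on top of enough random features almost surely separates it.

```python
import numpy as np

rng = np.random.default_rng(0)

# The four XOR points and their labels.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)

# Hypothetical high-dimensional representation: random ReLU features.
d = 200
W = rng.normal(size=(2, d))
b = rng.normal(size=d)
Phi = np.maximum(X @ W + b, 0.0)  # shape (4, d)

# A linear read-off in the d-dim feature space: least-squares fit,
# then threshold. With rank-4 features this fits the labels exactly.
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
preds = (Phi @ w > 0.5).astype(float)
print(preds)  # should reproduce y exactly (almost surely, for random W, b)
```

This is the standard random-features trick rather than a model of what the LLM actually does, but it shows how high dimensionality alone can hand you linear separability of $a \oplus b$.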

Comment by Matteo Capucci (matteo-capucci) on Towards Measures of Optimisation · 2023-05-27T08:49:38.997Z · LW · GW

Uhm, two comments/questions on this.

  1. Why do you need to decide between those probability distributions? You only need to get one action (or distribution thereof) out, and you can do that without deciding, e.g. by taking their average and sampling. On the other hand, vNM tells us a utility function is being maximized if your choices satisfy certain conditions, but 'vNM = agency' is a complicated position to hold.

  2. We know that at some level every physical system is doing gradient descent or a variational version thereof. So depending on the scale at which you model a system, you would assign it different degrees of agency?
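The "average and sample" move in point 1 can be sketched in a few lines (the action set and the two candidate distributions here are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

actions = ["left", "right", "stay"]
# Two hypothetical candidate policies, i.e. distributions over actions.
p = np.array([0.7, 0.2, 0.1])
q = np.array([0.1, 0.1, 0.8])

# No need to pick a winner: mix the distributions and sample an action.
mix = 0.5 * (p + q)          # still a valid probability distribution
action = rng.choice(actions, p=mix)
print(action)
```

The point being that a single action comes out of the system without any vNM-style comparison between $p$ and $q$ ever happening.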

By the way, gradient descent is a form of local utility minimization, and by tweaking the meaning of 'local' one can recover many other processes (evolution, Bayesian inference, RL, 'games', etc.).
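A toy instance of gradient descent read as local utility minimization (the quadratic "disutility" $f(x) = (x-3)^2$ and the step size are arbitrary choices for the sketch):

```python
# Gradient descent: repeatedly take a small step against the local gradient.
def grad_descent(grad, x0, lr=0.1, steps=100):
    x = x0
    for _ in range(steps):
        x = x - lr * grad(x)  # move locally "downhill" in disutility
    return x

# Minimize f(x) = (x - 3)^2, whose gradient is 2(x - 3).
x_star = grad_descent(lambda x: 2 * (x - 3), x0=0.0)
print(round(x_star, 4))  # -> 3.0, the global minimizer
```

Each step only uses local information (the gradient at the current point), which is exactly the sense in which the procedure is a *local* minimization of a utility/loss.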