Comment by Hdot (harpal) on Adam Optimizer Causes Privileged Basis in Transformer LM Residual Stream · 2024-09-10T09:35:52.660Z
Interesting find! Is this resolved by just using layer normalisation to normalise the activations along channels? That way we could keep our adaptive learning rates but smooth the distribution of activations and weights.
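To make the suggestion concrete, here is a minimal NumPy sketch (my own illustration, not from the post) of what normalising along the channel axis does: even if an adaptive optimiser has blown up one channel's scale, per-token layer normalisation rescales the activation vector so no single channel dominates. The shapes and the inflated channel are just toy assumptions.

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    # Normalise each activation vector across its channel (last) axis,
    # as LayerNorm does, so no single channel's scale can dominate.
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

# Toy residual stream: 4 tokens, 8 channels, with one channel blown
# up to mimic the per-channel scaling a privileged basis produces.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
x[:, 0] *= 100.0  # the "privileged" outlier channel

y = layer_norm(x)
# Each token's normalised vector now has ~zero mean and ~unit variance,
# so the outlier channel no longer dominates the activation distribution.
```

Note this only smooths the activations at the point where the norm is applied; it does not by itself change how Adam's per-parameter learning rates shape the weights.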