Have we seen any "ReLU instead of sigmoid-type improvements" recently
post by KvmanThinking (avery-liu) · 2024-11-23T03:51:52.984Z · LW · GW · 1 comment
This is a question post.
I read this, and it said:
there are huge low hanging fruit that any AI or random person designing AI in their garage can find by just grasping in the dark a bit, to get huge improvements at accelerating speeds.
Have we found anything like this at all? Have we seen any "weird tricks" discovered that make AI way more powerful for no apparent reason?
Answers
I'm not sure if these would be classed as "weird tricks" and I definitely think these have reasons for working, but some recent architecture changes which one might not expect to work a priori include:
- SwiGLU: Combines a gating mechanism and an activation function with learnable parameters.
- Grouped Query Attention: Uses fewer Key and Value heads than Query heads.
- RMSNorm: LayerNorm without the mean subtraction (or bias); normalizes by root-mean-square alone.
- Rotary Position Embeddings (RoPE): Rotates query/key vectors by position-dependent angles to encode positional information.
- Quantization: Lower-precision weights (fewer bits) without much drop in performance.
- Flash Attention: More efficient attention computation through better memory management.
- Various sparse attention schemes.
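For concreteness, here are minimal NumPy sketches of two of these, SwiGLU and RMSNorm. The function names, shapes, and parameters are illustrative, not taken from any particular codebase:

```python
import numpy as np

def swiglu(x, W, V, b, c):
    """SwiGLU feed-forward layer: Swish(xW + b) * (xV + c).
    One projection of x, passed through the smooth Swish nonlinearity,
    gates a second projection elementwise."""
    z = x @ W + b
    swish = z / (1.0 + np.exp(-z))  # Swish(z) = z * sigmoid(z)
    return swish * (x @ V + c)

def rms_norm(x, gain, eps=1e-6):
    """RMSNorm: rescale by root-mean-square only; unlike LayerNorm,
    no mean is subtracted and no bias is added."""
    rms = np.sqrt(np.mean(x ** 2, axis=-1, keepdims=True) + eps)
    return gain * x / rms
```

With `gain` set to ones, `rms_norm` leaves each row with an RMS of roughly 1, which is the entire normalization; the "weird trick" flavor is that dropping the mean-centering costs essentially nothing in practice.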
↑ comment by KvmanThinking (avery-liu) · 2024-11-23T14:15:05.816Z · LW(p) · GW(p)
How were these discovered? Slow, deliberate thinking, or someone trying some random thing to see what it does and suddenly the AI is a zillion times smarter?
↑ comment by Marcus Williams · 2024-11-23T15:58:47.612Z · LW(p) · GW(p)
"We offer no explanation as to why these architectures seem to work; we attribute their success, as all else, to divine benevolence." -SwiGLU paper.
I think it varies: a few of these came from trying "random" things, but mostly they are educated guesses which are then validated empirically. Often there is a specific problem we want to solve, e.g. exploding gradients or O(n^2) attention, and authors try things which may or may not solve/mitigate it.
1 comment
Comments sorted by top scores.
comment by ZY (AliceZ) · 2024-11-23T06:25:11.457Z · LW(p) · GW(p)
On the side - could you elaborate on why you think "ReLU better than sigmoid" is a "weird trick", if that is implied by this question?
The commonly agreed reason, as I understand it, is that ReLU helps with the vanishing gradient problem (this can be seen from the graphs of the activations and their derivatives).
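A quick way to see the vanishing-gradient point numerically: sigmoid's derivative peaks at 0.25, so backpropagating through a stack of sigmoid layers shrinks the gradient geometrically, while ReLU passes a gradient of exactly 1 on its active side. A toy sketch (the depth and input values are chosen purely for illustration, and weight matrices are ignored):

```python
import numpy as np

def sigmoid_grad(z):
    s = 1.0 / (1.0 + np.exp(-z))
    return s * (1.0 - s)          # peaks at 0.25 when z = 0

def relu_grad(z):
    return float(z > 0)           # 1 on the active side, else 0

# Gradient factor after backprop through 20 layers, activations only:
depth = 20
sig_chain = sigmoid_grad(0.0) ** depth   # 0.25**20, about 9e-13
relu_chain = relu_grad(1.0) ** depth     # exactly 1.0
```

Even in this best case for sigmoid (inputs at 0, where its derivative is largest), the gradient factor is vanishingly small after 20 layers, whereas the ReLU chain is unattenuated.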