Have we seen any "ReLU instead of sigmoid"-type improvements recently?

post by KvmanThinking (avery-liu) · 2024-11-23T03:51:52.984Z · LW · GW · 1 comment

This is a question post.


I read this, and it said:

there are huge low hanging fruit that any AI or random person designing AI in their garage can find by just grasping in the dark a bit, to get huge improvements at accelerating speeds.

Have we found anything like this at all? Have we seen any "weird tricks" discovered that make AI way more powerful for no obvious reason?

Answers

answer by Marcus Williams · 2024-11-23T08:46:35.360Z · LW(p) · GW(p)

I'm not sure if these would be classed as "weird tricks", and I definitely think they have reasons for working, but some recent architecture changes that one might not expect to work a priori include:

  • SwiGLU: Combines a gating mechanism and an activation function with learnable parameters (a minimal sketch appears after this list).
  • Grouped Query Attention: Uses fewer Key and Value heads than Query heads.
  • RMSNorm: LayerNorm but without the mean-centering translation (also sketched below).
  • Rotary Position Embeddings: Rotates query and key vectors to give them positional information.
  • Quantization: Lower-precision (fewer-bit) weights without much drop in performance.
  • Flash Attention: More efficient attention computation through better memory management.
  • Various sparse attention schemes.
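
To make the first and third items concrete, here is a minimal sketch of SwiGLU and RMSNorm, assuming PyTorch; the module names and dimensions (d_model, d_ff) are my own illustrative choices, not anything from the answer itself:

```python
# Minimal sketch (assuming PyTorch) of SwiGLU and RMSNorm.
# d_model / d_ff are illustrative sizes, not taken from any particular model.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SwiGLU(nn.Module):
    """Feed-forward block with a SiLU-gated linear unit (similar to LLaMA-style FFNs)."""

    def __init__(self, d_model: int, d_ff: int):
        super().__init__()
        self.w_gate = nn.Linear(d_model, d_ff, bias=False)  # gate projection
        self.w_up = nn.Linear(d_model, d_ff, bias=False)    # value projection
        self.w_down = nn.Linear(d_ff, d_model, bias=False)  # back to model dim

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Elementwise gate: SiLU(x W_gate) * (x W_up), then project down.
        return self.w_down(F.silu(self.w_gate(x)) * self.w_up(x))


class RMSNorm(nn.Module):
    """LayerNorm without the mean subtraction or bias: rescale by the RMS only."""

    def __init__(self, d_model: int, eps: float = 1e-6):
        super().__init__()
        self.weight = nn.Parameter(torch.ones(d_model))
        self.eps = eps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        rms = torch.sqrt(x.pow(2).mean(dim=-1, keepdim=True) + self.eps)
        return self.weight * (x / rms)


if __name__ == "__main__":
    x = torch.randn(2, 16, 512)              # (batch, sequence, d_model)
    y = SwiGLU(512, 1376)(RMSNorm(512)(x))   # norm, then gated feed-forward
    print(y.shape)                           # torch.Size([2, 16, 512])
```

The surprising part is how little is going on: RMSNorm simply drops the mean subtraction and bias from LayerNorm, and SwiGLU is just an elementwise product of two linear projections, one passed through SiLU, yet both tend to match or beat their predecessors.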

1 comment


comment by ZY (AliceZ) · 2024-11-23T06:25:11.457Z · LW(p) · GW(p)

On the side - could you elaborate on why you think "ReLU better than sigmoid" is a "weird trick", if that is what this question implies?

The commonly agreed reason, as I understand it, is that ReLU helps with the vanishing gradient problem (this can be seen from the graphs of the two activations' derivatives).
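
The point about the graphs can also be checked numerically. Below is a small sketch, assuming PyTorch, that just evaluates the two derivatives; the 20-layer figure in the final comment is my own illustrative number:

```python
# A quick numerical sketch (assuming PyTorch) of the vanishing-gradient point:
# the sigmoid's derivative never exceeds 0.25, so stacking many sigmoid layers
# multiplies gradients by a factor <= 0.25 per layer, while ReLU's derivative
# is exactly 1 wherever the unit is active.
import torch

x = torch.linspace(-6, 6, 1001, requires_grad=True)

# d/dx sigmoid(x) = sigmoid(x) * (1 - sigmoid(x)), maximised at x = 0.
torch.sigmoid(x).sum().backward()
print(x.grad.max().item())   # 0.25

x.grad = None
torch.relu(x).sum().backward()
print(x.grad.max().item())   # 1.0 (for x > 0; 0 otherwise)

# Through 20 stacked sigmoids the chain rule gives at best 0.25**20 ~ 9e-13,
# which is why early layers in deep sigmoid networks barely learn.
print(0.25 ** 20)
```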