Comment by Hopenope (baha-z) on jacquesthibs's Shortform · 2024-12-21T16:45:03.833Z
It depends on your world model. If your timelines are really short, then reaching AGI through automated interpretability research would still be a much safer path than the other, scaling-dependent alternatives.