Getting started with distillations: can someone critique mine?
post by Joyee Chen (joyee-chen) · 2024-02-20T00:49:28.429Z
This is a question post.
I'm Joyee, an aspiring AI alignment researcher (an undergrad at Berkeley), and I'm just getting started with distillations. Since I've heard a lot about iteration as the path to improvement, and about "colliding one's mental model with reality as frequently as possible", I'd like to ask: can someone critique my first one, a distillation of the paper "Scaling Laws for Transfer" viewed through the lens of alignment automation efforts?
https://drive.google.com/file/d/1vCV1XWxCxa5rvVIvSEfGfbJ26rtqyaQm/view?usp=sharing
At least as a beginner, my philosophy of distillation is roughly the one described in https://www.lesswrong.com/posts/XudyT6rotaCEe4bsp/the-benefits-of-distillation-in-research [LW · GW], especially when it comes to "lenses".
I fly the flag of Crocker's rules.