Not Buck, but I think it does, unless of course they Saw Something and decided that safety efforts weren't going to work. The essay seems to hinge on safety people being able to make models safer, which sounds plausible, but I'm sure the people who quit already knew that. Given that they had insider information and still concluded they couldn't make a positive impact, it seems less plausible that their safety efforts would have succeeded. Maybe whether or not someone has already quit is itself an indication of how impactful their safety work is. It also varies by lab, with OpenAI having many safety-conscious quitters and other labs having far fewer (I want to say none, but maybe I just haven't heard of any).
The other thing to think about is whether people who quit and claimed it was for safety reasons were being honest about that. I'd like to believe they were, but every company has culture/performance expectations that some employees might not want to meet, and quitting over safety concerns sounds better than quitting over performance issues.