Comments

Comment by robm on There should be more AI safety orgs · 2023-10-06T22:06:58.562Z · LW · GW

I have similar feelings; there's no clear path for someone coming from an adjacent field. I chose my current role largely based on expected QALYs, and I'd gladly move into AI Safety now for the same reason.

This post gives the impression that finding talent is not the current constraint, but then I'm confused about why the listed salaries are so high for some of these roles if the talent pool is so large.

I've submitted applications to a few of these orgs, with cover letters that basically say "I'm here and willing if you need my skills". One frustration is recognizing alignment as our greatest challenge while having no path to go work on it. Another is that the current labs look somewhat homogeneous and a lot like academia, which is not how I'd optimize for speed.

Comment by robm on We don’t trade with ants · 2023-01-25T21:58:49.542Z · LW · GW

I once came home to find ants carrying rainbow sprinkles across my apartment wall (left out from making a cake). I found it entertaining once I understood what I was seeing.

Comment by robm on Alexander and Yudkowsky on AGI goals · 2023-01-25T00:05:47.486Z · LW · GW

There's a difference between "what would you do to blend apples" and "what would you do to unbox an AGI". It's not clear to me whether that's just a difference of degree or something deeper.