Comments
Comment by novalinium on Carl Sagan, nuking the moon, and not nuking the moon · 2024-04-13T16:35:55.115Z
The little triangle guy in the back contamination figure reminds me of Haloarcula japonica:
"Gram-negative, motile by flagella, triangular disk, ca. 2-5 µm x 0.2-0.5 µm."
Comment by novalinium on Video/animation: Neel Nanda explains what mechanistic interpretability is · 2023-02-22T23:57:32.720Z
A single word for this would be an animatic, probably.
Comment by novalinium on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-18T23:13:53.262Z
If you're asking why I believe they don't require presence: I've been interviewing with them, and that's my understanding from those conversations. The first line of copy on their website is:
"Anthropic is an AI safety and research company that’s working to build reliable, interpretable, and steerable AI systems."
Sounds pretty much like a safety org to me.
Comment by novalinium on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-18T17:31:50.165Z
Anthropic does not require consistent physical presence.