Posts

Introducing the Anthropic Fellows Program 2024-11-30T23:47:29.259Z

Comments

Comment by Miranda Zhang (miranda-zhang) on DeepMind alignment team opinions on AGI ruin arguments · 2022-08-13T17:50:55.445Z · LW · GW

This was interesting and I would like to see more AI research organizations conducting + publishing similar surveys.

Comment by Miranda Zhang (miranda-zhang) on Pitching an Alignment Softball · 2022-06-08T22:39:29.798Z · LW · GW

I agree that AI safety can be successfully pitched to a wider range of audiences even without mentioning superintelligence, though I'm not sure this approach will get people to "holy shit, x-risk." However, I do think that appealing to people's near-term concerns could be sufficiently alarming to policymakers and other important stakeholders, and could speed up their willingness to implement useful policy.

Of course, this assumes that useful policy for near-term concerns will also be useful policy for AI x-risk. It seems plausible to me that the most effective policies for the latter look quite different from policies that clearly address both, but this still seems directionally good!