How many people are working (directly) on reducing existential risk from AI?

post by Benjamin Hilton (80000hours) · 2023-01-18T08:46:29.884Z · LW · GW · 1 comments

1 comment


comment by David Scott Krueger (formerly: capybaralet) (capybaralet) · 2023-01-19T02:15:00.425Z · LW(p) · GW(p)

In the current climate, I think playing up the neglectedness of "directly working on x-risks" is somewhat likely to be counterproductive, especially if not done carefully, for a few reasons:

1) It fosters an "us-vs-them" mindset.  
2) It fails to acknowledge that these researchers don't know what the most effective ways are to reduce x-risk, and there is not much consensus (and that which does exist is likely partially due to insular community epistemics).
3) It discounts the many researchers doing work that is technically indistinguishable from the work of researchers "directly working on x-risks".
4) Concern about x-risk (or more generally, the impact of powerful AI) from AI researchers is increasing organically, and we want to welcome this concern, rather than (accidentally/implicitly/etc.) telling people they don't count.

I think we should be working to develop clearer ideas about which kinds of work are differentially useful for x-safety, seeking to build a broader consensus (outside this community) about that, and trying to incentivize more explicit focus on x-safety.