[Alignment] Is there a census on who's working on what?
post by Cedar (xida-ren) · 2022-05-23T15:33:28.267Z · LW · GW · No comments
This is a question post.
Contents
- Answers: Thomas Kwa (7), Jan (4), Josephm (2)
- No comments
I'm new to the Rationality / EA community, and I've been getting the sense that the best use of my skills and time is to try to contribute to alignment.
So far, my focus has been guided by top posts and the names most often mentioned here on LW. E.g., "I see John Wentworth post a lot, so I'll spend my time investigating him and his claims."
The problem is that instincts I developed in large communities give me the sense that everyone is already working on this, and that if I let my attention roam the natural way, I'll end up in a big pool duplicating everyone else's work.
Is there anything here on what demographics are working on what problems?
Answers
answer by Thomas Kwa (thomas-kwa)
As far as I know, there is no good one, and this is a moderately-sized oversight by the rationality/EA community. In particular, there is no census of the number of people working on each AI alignment agenda. I want to create one as a side project, but I haven't had time. You might find the following partial data useful:
- The 2021 AI Alignment Literature Review and Charity Comparison [AF · GW] is the last overview of all active AI alignment organizations. Note that this excludes independent researchers like John Wentworth and Vanessa Kosoy, and does not have data on the size of each organization.
- The 2019 Leaders Forum [EA · GW] is the last instance when many EA organizations' beliefs about talent needs were aggregated.
- The 2020 EA Survey [? · GW] is the latest data on what causes EAs think are important.
As far as I know, there's nothing like this for the rationality community.
↑ comment by Thomas Kwa (thomas-kwa) · 2022-06-01T03:50:58.514Z · LW(p) · GW(p)
Also, the State of AI Report 2021 has a graph of the number of people working on long-term AI alignment research at various organizations (slide 157).
↑ comment by Cedar (xida-ren) · 2022-05-30T17:26:58.449Z · LW(p) · GW(p)
Thanks Thomas! I really appreciate this!
answer by Jan
As part of the AI Safety Camp, our team is preparing a research report on the state of AI safety! Should be online within a week or two :)
↑ comment by Cedar (xida-ren) · 2022-05-24T18:55:26.641Z · LW(p) · GW(p)
Yooo! That sounds amazing. Please do let me know once that report is up!
answer by Josephm
There is a Google Sheet that lists many of the people working on alignment and some basic information about each person and their work. It's not supposed to be shared publicly, but I've sent it to you in a private message.
No comments