Survey on intermediate goals in AI governance
post by MichaelA, MaxRa · 2023-03-17T13:12:19.363Z
It seems that a key bottleneck for the field of longtermism-aligned AI governance is limited strategic clarity (see Muehlhauser, 2020, 2021). As one effort to increase strategic clarity, in October-November 2022 we sent a survey to 229 people whom we had reason to believe are knowledgeable about longtermist AI governance, and received 107 responses. We asked about:
- respondents’ “theory of victory” for AI risk (which we defined as the main, high-level “plan” they’d propose for how humanity could plausibly manage the development and deployment of transformative AI such that we get long-lasting good outcomes),
- how they’d feel about funding going to each of 53 potential “intermediate goals” for AI governance,[1]
- what other intermediate goals they’d suggest,
- how high they believe the risk of existential catastrophe from AI is, and
- when they expect transformative AI (TAI) to be developed.
We hope the results will be useful to funders, policymakers, people at AI labs, researchers, field-builders, people orienting to longtermist AI governance, and perhaps other types of people. For example, the report could:
- Broaden the range of options people can easily consider
- Help people assess how much and in what way to focus on each potential “theory of victory”, “intermediate goal”, etc.
- Target and improve further efforts to assess how much and in what way to focus on each potential theory of victory, intermediate goal, etc.
If you'd like to see a summary of the survey results, please request access to this folder. We expect to approve all access requests,[2] and will expect readers to abide by the policy articulated in "About sharing information from this report" (for the reasons explained there).
Acknowledgments
This report is a project of Rethink Priorities, a think tank dedicated to informing decisions made by high-impact organizations and funders across various cause areas. The project was commissioned by Open Philanthropy. Full acknowledgements can be found in the linked "Introduction & summary" document.
If you are interested in RP’s work, please visit our research database and subscribe to our newsletter.
[1] Here’s the definition of “intermediate goal” that we stated in the survey itself:
By an intermediate goal, we mean any goal for reducing extreme AI risk that’s more specific and directly actionable than a high-level goal like ‘reduce existential AI accident risk’ but is less specific and directly actionable than a particular intervention. In another context (global health and development), examples of potential intermediate goals could include ‘develop better/cheaper malaria vaccines’ and ‘improve literacy rates in Sub-Saharan Africa’.
[2] If you haven't received access within two days of requesting it, this is probably just due to a mistake or delay on our end, so please request access again.
3 comments
comment by Olli Järviniemi (jarviniemi) · 2023-05-26T12:30:10.634Z
This survey is really good!
Speaking as someone who's exploring the AI governance landscape: I found the list of intermediate goals, together with the responses, a valuable compilation of ideas. In particular it made me appreciate how large the surface area is (in stark contrast to takes on how progress in technical AI alignment doesn't scale). I would definitely recommend this to people new to AI governance.
Reply by MichaelA · 2023-05-27T09:05:29.859Z
Glad to hear that!
I do feel excited about this being used as a sort of "201 level" overview of AI strategy and what work it might be useful to do. And I'm aware of the report being included in the reading lists / curricula for two training programs for people getting into AI governance or related work, which was gratifying.
Unfortunately we ran this survey before ChatGPT and various other events that have majorly changed the landscape of AI governance work to be done, e.g. by opening various policy windows. So I imagine people reading this report today may feel it has some odd omissions / vibes. But I still think it serves as a good 201 level overview despite that. Perhaps we'll run a follow-up in a year or two to provide an updated version.
comment by MichaelA · 2023-03-17T13:20:13.639Z
...and while I hopefully have your attention: My team is currently hiring for a Research Manager! If you might be interested in managing one or more researchers working on a diverse set of issues relevant to mitigating extreme risks from the development and deployment of AI, please check out the job ad!
The application form should take <2 hours. The deadline is the end of the day on March 21. The role is remote and we're able to hire in most countries.
People with a wide range of backgrounds could turn out to be the best fit for the role. As such, if you're interested, please don't rule yourself out due to thinking you're not qualified unless you at least read the job ad first!