Slowing AI: Reading list
post by Zach Stein-Perlman · 2023-04-17T14:30:02.467Z · LW · GW · 3 comments
This is a list of past and present research that could inform slowing AI. It is roughly sorted in descending order of priority, between and within subsections. I've read about half of these; I don't necessarily endorse them. Please have a low bar to suggest additions, replacements, rearrangements, etc.
Slowing AI
There is little research focused on whether or how to slow AI progress.[1]
- EA Forum AI Pause Debate [EA · GW] (2023)
- Slowing AI [? · GW] (Stein-Perlman 2023)
- "Slowing down AI" (Outline and Supplement 1) (Gloor unpublished)
- Let’s think about slowing down AI [LW · GW] (Grace 2022)
- "How bad/good would shorter AI timelines be?" (Aird unpublished)
- Instead of technical research, more people should focus on buying time [LW · GW] (Larsen et al. 2022)
- Is it time for a pause? [EA · GW] (Piper 2023)
- Why Not Slow AI Progress? (Alexander 2022)
- "The Containing/Decelerationist Approach to Transformative AI Governance" (Maas draft)
- What an actually pessimistic containment strategy looks like [LW · GW] (lc 2022)
Particular (classes of) interventions & affordances
Making AI risk legible to AI labs and the ML research community
- “AI Risk Discussions” website: Exploring interviews from 97 AI Researchers [LW · GW] (Gates et al. 2023)
- Interviews with 97 AI Researchers: Quantitative Analysis [LW · GW] (Shermohammed and Gates 2023)
- "AGI risk advocacy" (Lintz draft)
- Ways to buy time [LW · GW] (Larsen et al. 2022)
- What AI Safety Materials Do ML Researchers Find Compelling? [LW · GW] (Gates and Burns 2022)
Transparency & coordination
Relates to "Racing & coordination" below. Roughly, that subsection is about world-modeling and threat-modeling, while this one is about solutions and interventions.
See generally Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims (Brundage et al. 2020).
- Transparency
- Exploring the Relevance of Data Privacy-Enhancing Technologies for AI Governance Use Cases (Bluemke et al. 2023)
- This builds on Beyond Privacy Trade-offs with Structured Transparency (Trask and Bluemke et al. 2020)
- Honest organizations (Christiano 2018)
- Auditing & certification
- What does it take to catch a Chinchilla? Verifying Rules on Large-Scale Neural Network Training via Compute Monitoring (Shavit 2023)
- Auditing large language models: a three-layered approach (Mökander et al. 2023)
- The first two authors have other relevant-sounding work on arXiv
- AI Certification: Advancing Ethical Practice by Reducing Information Asymmetries (Cihon et al. 2021)
- Private literature review (2021)
- Model evaluations
- Safety evaluations and standards for AI (Barnes 2023)
- Probably private docs
- Whistleblowing
- Williamson unpublished
- Coordination proposals or mechanisms
- Probably private docs
Compute governance
- Advice and resources for getting into technical AI governance (Aarne unpublished)
- Video and Transcript of Presentation on Introduction to Compute Governance (Heim 2023)
- What does it take to catch a Chinchilla? Verifying Rules on Large-Scale Neural Network Training via Compute Monitoring (Shavit 2023)
- Lennart Heim on the compute governance era and what has to come after (80,000 Hours 2023)
- Nuclear Arms Control Verification and Lessons for AI Treaties (Baker 2023)
- The Semiconductor Supply Chain and Securing Semiconductor Supply Chains (Khan 2021)
Standards
- How technical safety standards could promote TAI safety [EA · GW] (O'Keefe et al. 2022)
- Standards for AI Governance: International Standards to Enable Global Coordination in AI Research & Development (Cihon 2019)
Regulation
- …
Publication practices
- "Publication norms for AI research" (Aird unpublished)
- "Publication Policies & Model-sharing policies for AI labs" (Wasil draft)
- Shift AI publication norms toward "don't always publish everything right away" in Survey on intermediate goals in AI governance [EA · GW] (Räuker & Aird 2023)
Differentially advancing safer paths
- Thoughts on sharing information about language model capabilities (Christiano 2023)
- Maybe Pragmatic AI Safety [? · GW] (Hendrycks and Woodside 2022); probably not very relevant
Actors' levers
There are some good lists and analyses of actors' levers, but none focused on slowing AI.
- "Levers of governance" in "Literature Review of Transformative AI Governance" (Maas draft)
- AI Policy Levers: A Review of the U.S. Government’s Tools to Shape AI Research, Development, and Deployment (Fischer et al. 2021)
- "Affordances" in "Framing AI strategy" [LW(p) · GW(p)] (Stein-Perlman 2023)
- Various private collections of AI policy ideas, notably including "AI Policy Ideas Database" (private work in progress)
- Current UK government levers on AI development [EA · GW] (Hadshar 2023)
- Existential and global catastrophic risk policy ideas database (filter for "Artificial intelligence") (Sepasspour et al. 2022)
Racing & coordination
This subsection covers racing for powerful AI, how the actors that develop AI behave, and how those actors could coordinate to decrease risk. Understanding how labs act and how racing for powerful AI plays out seem to be wide-open problems, as does giving an account of the culture of progress and publishing in AI labs and the ML research community.
I've read fewer than half of these; possibly many of them are off-point or bad.
- An AI Race for Strategic Advantage: Rhetoric and Risks (Cave and Ó hÉigeartaigh 2018)
- "The arms race model and its alternatives" in "Let’s think about slowing down AI" [LW · GW] (Grace 2022)
- Racing to the Precipice: a Model of Artificial Intelligence Development (Armstrong et al. 2016)
- Uncertainty, information, and risk in international technology races [EA · GW] (Emery-Xu et al. unpublished)
- Collective Action on Artificial Intelligence: A Primer and Review (de Neufville and Baum 2021)
- Strategic Implications of Openness in AI Development (Bostrom 2017)
- The Role of Cooperation in Responsible AI Development (Askell et al. 2019)
- "International Security" in AI Governance: A Research Agenda (Dafoe 2018)
- Industrial Policy for Advanced AI: Compute Pricing and the Safety Tax (Jensen et al. 2023)
- Emerging Technologies, Prestige Motivations and the Dynamics of International Competition (Barnhart 2022)
- Modelling and Influencing the AI Bidding War: A Research Agenda (Han et al. 2019)
- Beyond MAD?: The Race for Artificial General Intelligence (Ramamoorthy and Yampolskiy 2018)
- The Race for an Artificial General Intelligence: Implications for Public Policy (Naudé and Dimitri 2018)
- Avoiding the Precipice (AI Roadmap Institute 2017)
Technological restraint
- Paths Untaken: The History, Epistemology and Strategy of Technological Restraint, and lessons for AI [EA · GW] (Maas 2022)
- See also his Pausing AI? (2023) and its slides
- Automating Public Services: Learning from Cancelled Systems (Redden et al. 2022)
- What we’ve learned so far from our technological temptations project and Incentivized technologies not pursued (AI Impacts 2023)
- Untitled [list of potential examples of technological restraint] (Maas draft)
- Preliminary survey of prescient actions (Korzekwa 2020)
- Efficacy of AI Activism: Have We Ever Said No? [EA · GW]
- What does it take to ban a thing? (qbolec 2023)
Other
- Syllabus: Artificial Intelligence and International Security (Zwetsloot 2018)
- Karnofsky's nearcasting series (2022): How might we align transformative AI if it’s developed very soon?, Nearcast-based "deployment problem" analysis, and Racing through a minefield: the AI deployment problem (LW)
- "AI & antitrust/competition law" (Aird unpublished)
- Policymaking in the Pause [LW · GW] (FLI 2023)
- List of slowdown/halt AI requests [LW · GW] (Nardo 2023)
- Leveraging IP for AI governance (Schmit et al. 2023)
People[2]
There are no experts on slowing AI, but there are people whom it might be helpful to talk to, including the following (two disclaimers: this list is very non-exhaustive, and I have not talked to all of these people):
- Zach Stein-Perlman
- Lukas Gloor
- Jeffrey Ladish
- Matthijs Maas
- Specifically on technological restraint
- Akash Wasil
- Especially on publication practices or educating the ML community about AI risk
- Michael Aird
- Vael Gates
- Specifically on educating the ML community about AI risk; many other people might be useful to talk to about this, including Shakeel Hashim, Alex Lintz, and Kelsey Piper
- Katja Grace
- Lennart Heim on hardware policy
- Onni Aarne on hardware
- Probably some other authors of research listed above
[1] There is also little research on particular relevant considerations, like how multipolarity among labs relates to x-risk and to slowing AI, or how AI misuse x-risk and non-AI x-risk relate to slowing AI.
[2] I expect there is a small selection bias where the people who think and write about slowing AI are disposed to be relatively optimistic about it.
3 comments
comment by Fabian Schimpf (fasc) · 2023-09-13T10:12:39.670Z · LW(p) · GW(p)
Thank you for compiling this list. This is useful, and I expect to point people to it in the future. The best thing, IMO, is that it is not verbose and not dripping with personal takes on the problem; I would like to see more compilations of topics like this to give other people a leg up when they aspire to venture into a field.
A potential addition is Dan Hendrycks's PAIS agenda, in which he advocates for ML research that promotes alignment without also causing advances in capabilities. This effectively also slows AI (capabilities) development, and I am quite partial to this idea.
↑ comment by Zach Stein-Perlman · 2023-09-13T17:40:52.670Z · LW(p) · GW(p)
Yay.
Many other collections / reading lists exist, and I'm aware of many public and private ones in AI strategy, so feel free to DM me strategy/governance/forecasting topics you'd want to see collections on.
I haven't updated this post much since April but I'll update it soon and plan to add PAIS [? · GW], thanks.
comment by harsimony · 2023-04-17T20:22:23.756Z · LW(p) · GW(p)
Thanks for writing this!
In addition to regulatory approaches to slowing down AI development, I think there is room for "cultural" interventions within academic and professional communities that discourage risky AI research:
https://www.lesswrong.com/posts/ZqWzFDmvMZnHQZYqz/massive-scaling-should-be-frowned-upon [LW · GW]