AI policy ideas: Reading list
post by Zach Stein-Perlman · 2023-04-17T19:00:00.604Z
Related: Ideas for AI labs: Reading list. See also: AI labs' statements on governance.
This document is about AI policy ideas. It's largely from an x-risk perspective. Strikethrough denotes sources that I expect are less useful for you to read.
Lists
Lists of government (especially US government) AI policy ideas. I recommend carefully reading the lists in the first ~5 bullets, noting ideas to zoom in on, and skipping the rest of this section.
- Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims (Brundage et al. 2020)
- Followed up by Filling gaps in trustworthy development of AI (Avin et al. 2021)
- 12 tentative ideas for US AI policy (Muehlhauser 2023) (EAF [EA · GW])
- Survey on intermediate goals in AI governance [EA · GW] (Räuker and Aird 2023)
- Policymaking in the Pause (FLI 2023) (LW)
- Frontier AI Regulation: Managing Emerging Risks to Public Safety (Anderljung et al. 2023)
- "30 actions to reduce existential risk" in Existential risk and rapid technological change (Stauffer et al. 2023)
- The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation (Brundage et al. 2018)
- How major governments can help with the most important century (Karnofsky 2023) (LW)
- [Chuck Schumer] (2023)
- [Dan Hendrycks] (2023)
- "Discussion" in "Verifying Rules on Large-Scale NN Training via Compute Monitoring" (Shavit 2023)
- Future Proof (CLTR 2021)
- Year 1 Report (National Artificial Intelligence Advisory Committee 2023)
- Final Report (National Security Commission on Artificial Intelligence 2021)
- Challenges to U.S. National Security and Competitiveness Posed by AI (Matheny 2023)
- "Policy Options" in "Artificial Intelligence and Strategic Trade Controls" (Viski et al. 2020)
- Existential and global catastrophic risk policy ideas database (filter for "Artificial intelligence") (Sepasspour et al. 2022)
- Various private lists and works in progress
Levers
Some sources focus on policy levers rather than particular policy proposals.
- "Governmental levers" in "Literature Review of Transformative AI Governance" (Maas draft)
- This report is excellent
- AI Policy Levers: A Review of the U.S. Government's Tools to Shape AI Research, Development, and Deployment (Fischer et al. 2021)
- "Affordances" in "Framing AI strategy" (Stein-Perlman 2023)
- Current UK government levers on AI development [EA · GW] (Hadshar 2023)
- Standards (largely a non-government lever)
- How technical safety standards could promote TAI safety [EA · GW] (O'Keefe et al. 2022)
- Standards for AI Governance (Cihon 2019)
- Actionable-guidance and roadmap recommendations for the NIST AI Risk Management Framework (Barrett et al. 2022) (LW)
- AI Risk Management Framework (NIST 2023)
- Ethically Aligned Design (IEEE 2017)
- Global AI Standards Repository (OCEANIS)
Other policy guidance
- Five considerations to guide the regulation of "General Purpose AI" in the EU's AI Act (AI Now Institute 2023)
- AI Accountability Policy (National Telecommunications and Information Administration 2023)
Desiderata
Some sources focus on abstract desiderata rather than how to achieve them.
- Asilomar AI Principles (FLI 2017)
- Blueprint for an AI Bill of Rights (OSTP 2022)
- OECD AI Principles (OECD 2019)
- I think there are other relevant OECD reports or recommendations
- Policy Desiderata for Superintelligent AI (Bostrom et al. 2018)
- Universal Guidelines for Artificial Intelligence (The Public Voice 2018)
- AI Policy Challenges (FLI 2018)
- Artificial Intelligence: the global landscape of ethics guidelines (Jobin et al. 2019)
Ideas[1]
- Verifying Rules on Large-Scale NN Training via Compute Monitoring (Shavit 2023) (LW [LW · GW])
- Why and How Governments Should Monitor AI Development (Whittlestone and Clark 2021)
- Regulatory Markets: The Future of AI Governance (Hadfield and Clark 2023)
- Why we need a new agency to regulate advanced artificial intelligence: Lessons on AI control from the Facebook Files (Korinek 2021)
- Compute Funds and Pre-trained Models (Anderljung 2022)
- AI & Antitrust: Reconciling Tensions Between Competition Law and Cooperative AI Development (Hua and Belfield 2021)
- See also Antitrust-compliant AI industry self-regulation (O'Keefe 2021)
- See also AI & antitrust/competition law (Aird 2022)
- How We Can Regulate AI (Balwit 2023) [compute governance]
- Regulating artificial intelligence: Proposal for a global solution (Erdélyi and Goldsmith 2022)
- Immigration Policy and the Global Competition for AI Talent (Huang and Arnold 2020)
- AI & Global Governance: Why We Need an Intergovernmental Panel for Artificial Intelligence (Miailhe 2018)
- Liability
- New: LPP on liability
- Products liability law as a way to address AI harms (Villasenor 2019)
- Liability for artificial intelligence and other emerging digital technologies (European Commission 2019)
- Footnote 16 of Collective action on artificial intelligence: A primer and review (de Neufville and Baum 2021) points to academic work on AI and liability
- Crafting Legislation to Prevent AI-Based Extinction (Cohen and Osborne 2023)
- This is technically a response to a government request for comment, but it's really a policy proposal
- An AI Policy Tool for Today: Ambitiously Invest in NIST (Anthropic 2023)
- Or see the policy memo version of this blogpost
- Sharing Powerful AI Models (Shevlane 2022)
- You Can't Regulate What You Don't Understand (O'Rielly 2023)
- Killer Robots Are Here—and We Need to Regulate Them (Trager and Luca 2022)
- Incident reporting & tracking
- See PAI's AI Incidents Database (arXiv, blogpost)
Policy proposals in the mass media
- AI's Gatekeepers Aren’t Prepared for What's Coming (FP: Scharre 2023)
- This piece is great
- The world needs an international agency for artificial intelligence, say two AI experts (Economist: Marcus and Reuel 2023)
- We Need a Manhattan Project for AI Safety (POLITICO: Hammond 2023)
- Does the world need an arms control treaty for AI? (CyberScoop: Groll 2023)
- AI Desperately Needs Global Oversight (WIRED: Chowdhury 2023)
- The Surprising Thing A.I. Engineers Will Tell You if You Let Them (NYT: Klein 2023)
- We Must Regulate A.I. Here’s How. (NYT: Khan 2023)
- We Must Declare Jihad Against A.I. (Compact: Cuenco 2023)
Responses to government requests for comment
- Response to the NTIA AI Accountability Policy (GovAI 2023)
- GovAI Response to the Future of Compute Review - Call for Evidence (GovAI 2022)
- Future of compute review - submission of evidence (CLTR et al. 2022)
- Anthropic Comment Regarding "Study To Advance a More Productive Tech Economy" (Anthropic 2022)
- National Security Addition to the NIST AI RMF (Special Competitive Studies Project 2023)
- Reconfiguring Resilience for Existential Risk (CSER 2021)
- Response to the UK's Future of Compute Review (CLTR et al. 2023)
- Submission to the NIST AI Risk Management Framework (GovAI 2022)
- Submission of Feedback to the European Commission's Proposal for a Regulation laying down harmonised rules on artificial intelligence (CSER 2021)
- Advice to UN High-level Panel on Digital Cooperation (CSER and GovAI 2019)
- Consultation on the European Commission's White Paper on Artificial Intelligence: a European approach to excellence and trust (GovAI 2020)
See also
Very non-exhaustive.
- AI Governance: A Research Agenda (Dafoe 2018)
- Slowing AI [? · GW] (Stein-Perlman 2023)
- Survey on intermediate goals in AI governance [EA · GW] (Räuker and Aird 2023)
- FACT SHEET: Biden-Harris Administration Announces New Actions to Promote Responsible AI Innovation that Protects Americans’ Rights and Safety (The White House 2023)
- OECD AI Policy Observatory
- PAI and the AI and Shared Prosperity Initiative
- [Mark Warner] (2023)
- EU AI Act (EU)
- CHIPS Act (US)
- A pro-innovation approach to AI regulation (UK Office for Artificial Intelligence 2023)
This post is largely missing policy levers beyond domestic policy: international relations,[2] international organizations, and standards.
Some sources are roughly sorted within sections by a combination of x-risk relevance, quality, and influentialness, but sometimes I didn't bother to sort them, and I haven't read all of them.
Please have a low bar to suggest additions, substitutions, rearrangements, etc.
Thanks to Jakub Kraus and Seán Ó hÉigeartaigh for some sources.
Last updated: 10 July 2023.
1. ^ I think these sources each focus on a particular policy idea; I haven't even skimmed all of them. Very non-exhaustive. Thanks to Sepasspour et al. 2022 and a private list for some sources.
2. ^
International agreements seem particularly important and neglected.
One source (not focused on policy ideas or levers): Nuclear Arms Control Verification and Lessons for AI Treaties (Baker 2023).
Oliver Guest agrees that there are not amazing sources but mentions:
- CSET on international security in the context of military AI
- CNAS on international arms control and confidence-building measures for military AI
7 comments
comment by JakubK (jskatt) · 2023-04-30T09:37:37.149Z
- An AI Policy Tool for Today: Ambitiously Invest in NIST (Anthropic 2023)
- National Security Addition to the NIST AI RMF (Special Competitive Studies Project 2023)
- Existential risk and rapid technological change - a thematic study for UNDRR (Stauffer et al. 2023), especially section 4.3 ("30 actions to reduce existential risk")
- Crafting Legislation to Prevent AI-Based Extinction: Submission of Evidence to the Science and Technology Select Committee’s Inquiry on the Governance of AI (Cohen and Osborne 2023)
- Why we need a new agency to regulate advanced artificial intelligence: Lessons on AI control from the Facebook Files (Korinek 2021)
↑ comment by Zach Stein-Perlman · 2023-04-30T17:06:24.145Z
Thank you!!
- Will add this; too bad it's so meta
- Will read this-- probably it's worth adding and maybe it points to more specific sources also worth adding
- Will add this; too bad it's so meta
- Will add this
- Already have this one
comment by JakubK (jskatt) · 2023-04-23T19:30:19.064Z
Ezra Klein listed some ideas (I've added some bold):
The first is the question — and it is a question — of interpretability. As I said above, it’s not clear that interpretability is achievable. But without it, we will be turning more and more of our society over to algorithms we do not understand. If you told me you were building a next generation nuclear power plant, but there was no way to get accurate readings on whether the reactor core was going to blow up, I’d say you shouldn’t build it. Is A.I. like that power plant? I’m not sure. But that’s a question society should consider, not a question that should be decided by a few hundred technologists. At the very least, I think it’s worth insisting that A.I. companies spend a good bit more time and money discovering whether this problem is solvable.
The second is security. For all the talk of an A.I. race with China, the easiest way for China — or any country for that matter, or even any hacker collective — to catch up on A.I. is to simply steal the work being done here. Any firm building A.I. systems above a certain scale should be operating with hardened cybersecurity. It’s ridiculous to block the export of advanced semiconductors to China but to simply hope that every 26-year-old engineer at OpenAI is following appropriate security measures.
The third is evaluations and audits. This is how models will be evaluated for everything from bias to the ability to scam people to the tendency to replicate themselves across the internet.
Right now, the testing done to make sure large models are safe is voluntary, opaque and inconsistent. No best practices have been accepted across the industry, and not nearly enough work has been done to build testing regimes in which the public can have confidence. That needs to change — and fast. Airplanes rarely crash because the Federal Aviation Administration is excellent at its job. The Food and Drug Administration is arguably too rigorous in its assessments of new drugs and devices, but it is very good at keeping unsafe products off the market. The government needs to do more here than just write up some standards. It needs to make investments and build institutions to conduct the monitoring.
The fourth is liability. There’s going to be a temptation to treat A.I. systems the way we treat social media platforms and exempt the companies that build them from the harms caused by those who use them. I believe that would be a mistake. The way to make A.I. systems safe is to give the companies that design the models a good reason to make them safe. Making them bear at least some liability for what their models do would encourage a lot more caution.
The fifth is, for lack of a better term, humanness. Do we want a world filled with A.I. systems that are designed to seem human in their interactions with human beings? Because make no mistake: That is a design decision, not an emergent property of machine-learning code. A.I. systems can be tuned to return dull and caveat-filled answers, or they can be built to show off sparkling personalities and become enmeshed in the emotional lives of human beings.
comment by Zach Stein-Perlman · 2023-04-18T20:45:04.496Z
(Notes to self)
More sources to maybe integrate:
- https://arxiv.org/pdf/2305.15324.pdf#page=15 (DeepMind 2023)
- https://www.governance.ai/research-paper/national-priorities-for-artificial-intelligence-ostp-response
- https://arxiv.org/abs/2307.04699
- Stuff in https://www.lesswrong.com/posts/iFrefmWAct3wYG7vQ/ai-labs-statements-on-governance [LW · GW]
- https://fas.org/publication/six-ideas-for-national-ai-strategy/
- https://www.democrats.senate.gov/imo/media/doc/schumer_ai_framework.pdf
- https://www.helenabiosecurity.org
- https://www.governance.ai/research-paper/response-to-the-ntia-ai-accountability-policy
- https://carnegieendowment.org/2023/07/12/it-s-time-to-create-national-registry-for-large-ai-models-pub-90180
- https://www.csis.org/events/sen-chuck-schumer-launches-safe-innovation-ai-age-csis (see key quotes compiled by Mauricio)
- https://www.ias.edu/aipolicy
- https://twitter.com/ohlennart/status/1668694795789672471
- https://www.governance.ai/post/proposing-a-foundation-model-information-sharing-regime-for-the-uk
- https://www.theguardian.com/technology/2023/jun/05/ai-could-outwit-humans-in-two-years-says-uk-government-adviser
- https://futureoflife.org/ai-policy/fli-on-a-statement-on-ai-risk-and-next-steps/
- https://foreignpolicy.com/2023/06/21/china-united-states-semiconductor-chips-sanctions-evasion/
- Yoshua Bengio proposes banning agent-y AI: AI Scientists: Safe and Useful AI? (LW [LW · GW], LW [LW · GW])
- https://blog.google/technology/ai/a-policy-agenda-for-responsible-ai-progress-opportunity-responsibility-security/ and AI at Google: our principles and [Google on AI governance, https://ai.google/static/documents/perspectives-on-issues-in-ai-governance.pdf] and Microsoft stuff (https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RW14Gtw? https://www.microsoft.com/en-us/ai/responsible-ai? https://www.microsoft.com/cms/api/am/binary/RE4pKH5?)
- Policy and investment recommendations for trustworthy Artificial Intelligence and Ethics guidelines for trustworthy AI
- https://www.caidp.org/news/ for policy news?
What should the see-also section do? I'm not sure. Figure that out. Consider adding general AI governance sources.
Maybe I should organize by topic...
Update, 30 October 2023: I haven't updated this in months. There's lots of new stuff. E.g. https://cset.georgetown.edu/article/regulating-the-ai-frontier-design-choices-and-constraints/ and https://arxiv.org/abs/2310.13625 and lots of other research-org stuff and lots of UK-AI-summit stuff.
comment by Sheikh Abdur Raheem Ali (sheikh-abdur-raheem-ali) · 2023-05-10T12:46:01.540Z
Future of compute review - submission of evidence
Prepared by:
- Dr Jess Whittlestone, Centre for Long-Term Resilience (CLTR)
- Dr Shahar Avin, Centre for the Study of Existential Risk (CSER), University of Cambridge
- Katherine Collins, Computational and Biological Learning Lab (CBL), University of Cambridge
- Jack Clark, Anthropic PBC
- Jared Mueller, Anthropic PBC
↑ comment by Zach Stein-Perlman · 2023-05-10T15:58:38.709Z
Thanks-- already have that as "Future of compute review - submission of evidence (CLTR et al. 2022)"
↑ comment by Sheikh Abdur Raheem Ali (sheikh-abdur-raheem-ali) · 2023-05-10T18:40:11.764Z
Oh, oops, somehow I saw the GovAI response link but not the original one just below it.