AI policy ideas: Reading list

post by Zach Stein-Perlman · 2023-04-17T19:00:00.604Z

Contents

  Lists
  Levers
  Other policy guidance
  Desiderata
  Ideas[1]
  Policy proposals in the mass media
  Responses to government requests for comment
  See also

Related: Ideas for AI labs: Reading list. See also: AI labs' statements on governance.

This document is about AI policy ideas. It's largely from an x-risk perspective. Strikethrough denotes sources that I expect are less useful for you to read.

Lists

Lists of government (especially US government) AI policy ideas. I recommend carefully reading the lists in the first ~5 bullets, noticing ideas to zoom in on, and skipping the rest of this section.

Levers

Some sources focus on policy levers rather than particular policy proposals.

Other policy guidance

Desiderata

Some sources focus on abstract desiderata rather than how to achieve them.

Ideas[1]

Policy proposals in the mass media

Responses to government requests for comment

See also

Very non-exhaustive.


This post is largely missing policy levers beyond domestic policy: international relations,[2] international organizations, and standards.

Some sources are roughly sorted within sections by a combination of x-risk relevance, quality, and influence, but sometimes I didn't bother to try to sort them, and I haven't read all of them.

Please have a low bar for suggesting additions, substitutions, rearrangements, etc.

Thanks to Jakub Kraus and Seán Ó hÉigeartaigh for some sources.

Last updated: 10 July 2023.

  1. ^

    - I think these sources each focus on a particular policy idea; I haven't even skimmed all of them
    - Very non-exhaustive
    - Thanks to Sepasspour et al. 2022 and a private list for some sources

  2. ^

    International agreements seem particularly important and neglected.

    One source (not focused on policy ideas or levers): Nuclear Arms Control Verification and Lessons for AI Treaties (Baker 2023).

    Oliver Guest agrees that there are not amazing sources but mentions:

    - CSET on international security in the context of military AI
    - CNAS on international arms control and confidence-building measures for military AI

7 comments


comment by JakubK (jskatt) · 2023-04-30T09:37:37.149Z
comment by Zach Stein-Perlman · 2023-04-30T17:06:24.145Z

Thank you!!

  • Will add this; too bad it's so meta
  • Will read this; probably it's worth adding, and maybe it points to more specific sources also worth adding
  • Will add this; too bad it's so meta
  • Will add this
  • Already have this one
comment by JakubK (jskatt) · 2023-04-23T19:30:19.064Z

Ezra Klein listed some ideas (I've added some bold):

The first is the question — and it is a question — of interpretability. As I said above, it’s not clear that interpretability is achievable. But without it, we will be turning more and more of our society over to algorithms we do not understand. If you told me you were building a next generation nuclear power plant, but there was no way to get accurate readings on whether the reactor core was going to blow up, I’d say you shouldn’t build it. Is A.I. like that power plant? I’m not sure. But that’s a question society should consider, not a question that should be decided by a few hundred technologists. At the very least, I think it’s worth insisting that A.I. companies spend a good bit more time and money discovering whether this problem is solvable.

The second is security. For all the talk of an A.I. race with China, the easiest way for China — or any country for that matter, or even any hacker collective — to catch up on A.I. is to simply steal the work being done here. Any firm building A.I. systems above a certain scale should be operating with hardened cybersecurity. It’s ridiculous to block the export of advanced semiconductors to China but to simply hope that every 26-year-old engineer at OpenAI is following appropriate security measures.

The third is evaluations and audits. This is how models will be evaluated for everything from bias to the ability to scam people to the tendency to replicate themselves across the internet.

Right now, the testing done to make sure large models are safe is voluntary, opaque and inconsistent. No best practices have been accepted across the industry, and not nearly enough work has been done to build testing regimes in which the public can have confidence. That needs to change — and fast. Airplanes rarely crash because the Federal Aviation Administration is excellent at its job. The Food and Drug Administration is arguably too rigorous in its assessments of new drugs and devices, but it is very good at keeping unsafe products off the market. The government needs to do more here than just write up some standards. It needs to make investments and build institutions to conduct the monitoring.

The fourth is liability. There’s going to be a temptation to treat A.I. systems the way we treat social media platforms and exempt the companies that build them from the harms caused by those who use them. I believe that would be a mistake. The way to make A.I. systems safe is to give the companies that design the models a good reason to make them safe. Making them bear at least some liability for what their models do would encourage a lot more caution.

The fifth is, for lack of a better term, humanness. Do we want a world filled with A.I. systems that are designed to seem human in their interactions with human beings? Because make no mistake: That is a design decision, not an emergent property of machine-learning code. A.I. systems can be tuned to return dull and caveat-filled answers, or they can be built to show off sparkling personalities and become enmeshed in the emotional lives of human beings.

comment by Zach Stein-Perlman · 2023-04-18T20:45:04.496Z

(Notes to self)

More sources to maybe integrate:

What should the see-also section do? I'm not sure. Figure that out. Consider adding general AI governance sources.

Maybe I should organize by topic...

Update, 30 October 2023: I haven't updated this in months. There's lots of new stuff. E.g. https://cset.georgetown.edu/article/regulating-the-ai-frontier-design-choices-and-constraints/ and https://arxiv.org/abs/2310.13625 and lots of other research-org stuff and lots of UK-AI-summit stuff.

comment by Sheikh Abdur Raheem Ali (sheikh-abdur-raheem-ali) · 2023-05-10T12:46:01.540Z

Future of compute review - submission of evidence

Prepared by: 

  • Dr Jess Whittlestone, Centre for Long-Term Resilience (CLTR) 
  • Dr Shahar Avin, Centre for the Study of Existential Risk (CSER), University of Cambridge
  • Katherine Collins, Computational and Biological Learning Lab (CBL), University of Cambridge
  • Jack Clark, Anthropic PBC 
  • Jared Mueller, Anthropic PBC
comment by Zach Stein-Perlman · 2023-05-10T15:58:38.709Z

Thanks; I already have that as "Future of compute review - submission of evidence (CLTR et al. 2022)"

comment by Sheikh Abdur Raheem Ali (sheikh-abdur-raheem-ali) · 2023-05-10T18:40:11.764Z

Oh, oops, somehow I saw the GovAI response link but not the original one just below it.