Stopping dangerous AI: Ideal US behavior

post by Zach Stein-Perlman · 2023-05-09T21:00:55.187Z · LW · GW

Disclaimer: this post doesn't have the answers. Moreover, it's an unfinished draft. Hopefully a future version will be valuable, but that will only occur if I revise/rewrite it. For now you're better off reading sources linked from AI policy ideas: Reading list [LW · GW] and Slowing AI: Reading list [? · GW].

Set aside most of AI safety to focus on the speed of AI progress. What should the US government do?[1] This post assumes that it's good to slow (dangerous) AI (especially near the end).

The ultimate goal is to prevent the deployment of powerful AI systems—that is, AI systems that would cause a catastrophe—until we learn how to make them safe. (Pretending alignment is monolithic.) You can achieve this by ensuring that powerful AI systems are not developed, are not deployed, or cannot cause a catastrophe if deployed because the world is super-resilient. Focusing on development is most promising.

So, delay the development of powerful AI systems until they're safe. (Probably that's too crude. Complications: (1) alignment progress is endogenous; (2) you have to actually pay the alignment tax, so leading labs' lead time and safety-awareness are crucial.)

Lots of AI—robotics, autonomous vehicles, some medical applications, image generation, and so forth—isn't very dangerous.

The dream would be that we have a test to determine whether an AI system would cause a catastrophe, that all large training runs are audited, and that if a model fails the test then the training run is shut down. But (1) good tests don't exist yet, and (2) the only known way to make an AI system safe is to make it not powerful enough to cause a catastrophe. Related: model evals and Yonadav Shavit's Verifying Rules on Large-Scale Neural Network Training via Compute Monitoring (2023).
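To make the audit-and-gate dream concrete, here is a minimal sketch of the decision logic a regulator might apply to registered training runs. Everything here is hypothetical: the post specifies no compute threshold (1e26 FLOP below is an illustrative stand-in), no real catastrophe test exists yet, and the `TrainingRun` and `audit` interfaces are invented for illustration.

```python
from dataclasses import dataclass

# Hypothetical compute threshold above which a training run must be audited.
# The post gives no number; 1e26 FLOP is purely illustrative.
AUDIT_THRESHOLD_FLOP = 1e26


@dataclass
class TrainingRun:
    lab: str
    planned_flop: float
    passed_danger_eval: bool  # result of the (not-yet-existing) catastrophe test


def audit(run: TrainingRun) -> str:
    """Return the regulator's decision for one registered training run."""
    if run.planned_flop < AUDIT_THRESHOLD_FLOP:
        return "exempt"       # small runs fall below the danger threshold
    if run.passed_danger_eval:
        return "approved"     # the model passed the hypothetical catastrophe test
    return "shut down"        # large run + failed test -> halt training


decisions = [audit(r) for r in [
    TrainingRun("small-lab", 1e22, passed_danger_eval=False),
    TrainingRun("frontier-lab", 5e26, passed_danger_eval=True),
    TrainingRun("frontier-lab", 8e26, passed_danger_eval=False),
]]
# decisions == ["exempt", "approved", "shut down"]
```

The sketch makes the post's two caveats visible: the whole scheme hinges on `passed_danger_eval` being meaningful (good tests don't exist yet), and on the compute-monitoring infrastructure that would make registration of large runs enforceable.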

Ways governments can slow AI (this is more a list of policy levers than of policy proposals)

An oft-mentioned kind of policy is clarifying or increasing AI developers' liability for AI harms. I currently feel confused about this. There are negative (and some positive) externalities in AI development, deployment, publication, and (lack of) security. It would be nice to internalize the externalities.

"What about China?" People sometimes assert that slowing down AI progress would be bad or politically infeasible because China wouldn't slow down, so China would "win" the "race." This is questionable for several reasons, but the real reason it's a bad objection to slowing AI is that labs and Western governments have tools to slow down China even more than the West.

  1. ^

    Note that what one should nudge the US government to do on the margin in the real world is not in exactly the same direction as what it should do to act optimally.
