Cruxes on US lead for some domestic AI regulation

post by Zach Stein-Perlman · 2023-09-10T18:00:06.959Z · LW · GW · 3 comments


Written quickly. Suggestions welcome.

A possible risk of some US AI regulation is that it would differentially slow US AI progress, and that this would be bad. This post explores the factors that determine how much US regulation would differentially slow US AI progress and how bad that would be.

Note that the differentially-slowing-US problem applies only to regulation that slows US AI progress (toward powerful/dangerous systems), such as strong regulation of large training runs. The US can do things like facilitate incident reporting and clarify AI labs' liability for harms without slowing domestic AI progress, and some regulation (especially restricting the publication of AI research and the sharing of model weights) would differentially slow foreign AI progress!

Note that international coordination on AI safety mostly avoids this problem.

Cruxes

If I were making a model of the differentially-slowing-US problem, these would be its factors.

(Here "China" often can mean any foreign state. Actual-China seems most relevant because it's well-positioned to lead on AI in worlds where strong US regulation slows US AI progress.)

Two questions seem particularly important: extraterritoriality and the "effectively move" question. I suspect some people have a good sense of the extent to which US AI regulation would be applied extraterritorially and what that depends on, and some people have a good sense of the extent to which labs can effectively hop regulatory jurisdictions and what that depends on. If you know, please let me know!
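As a toy illustration of what such a model might look like, here is a minimal Python sketch. All factor names, functional forms, and numbers are my own illustrative assumptions, not the post's; the extraterritoriality and "effectively move" questions appear as two of the factors.

```python
# A toy, back-of-the-envelope sketch (mine, not the author's) of a model of
# the differentially-slowing-US problem. Every factor and number here is an
# illustrative assumption.

def differential_slowdown_badness(
    us_slowdown_years: float,     # years of US progress the regulation costs
    extraterritoriality: float,   # 0-1: how much the regulation also binds foreign labs
    jurisdiction_hopping: float,  # 0-1: how easily labs can effectively move abroad
    foreign_lag_years: float,     # how far behind the leading foreign state (e.g. China) is
    us_relative_safety: float,    # 0-1: how much safer leading US labs are than foreign ones
) -> float:
    """Return a rough 'badness' score for a given US regulation."""
    # Extraterritorial application slows foreign labs too, and easy
    # jurisdiction-hopping lets "slowed" labs relocate, so both shrink the
    # *differential* slowdown.
    effective_slowdown = us_slowdown_years * (1 - extraterritoriality) * (1 - jurisdiction_hopping)
    # The US can absorb slowdown up to the size of its lead before losing it;
    # losing the lead matters more the safer US labs are relative to foreign ones.
    lead_lost_years = max(0.0, effective_slowdown - foreign_lag_years)
    return lead_lost_years * us_relative_safety

# Example: a strong compute regulation costing ~3 years, applied somewhat
# extraterritorially, with labs partly able to relocate.
print(differential_slowdown_badness(3.0, 0.2, 0.4, 2.0, 0.6))
```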


The US government should do speed-orthogonal safety stuff (e.g. facilitating safety features on hardware, clarifying liability, and facilitating training-run and incident reporting). The US government should slow foreign progress (e.g. by restricting the publication of research, restricting the sharing of research artifacts like model weights, imposing export controls, and setting security standards). My guess is that the US government should avoid slowing leading labs much; things that would change my mind include foreign labs seeming further behind than I currently believe, or leading labs seeming less (relatively) safe than I currently believe.


Thanks to two people for discussing some of these ideas with me.

  1. Enforcing some best practices for safety wouldn't really hurt speed. Some important regulation would.

  2. To the extent that leading labs are already doing what a regulation would require, the regulation doesn't slow US AI progress, but it doesn't improve safety much either. (It would have the minor positive effects of requiring less cautious labs to be safer, preventing leading labs from becoming much less safe, and maybe causing future regulation to be more productive.)

  3. My impression: very unlikely.

  4. Or deny talent, but that seems less important.

  5. My impression: a lot.

  6. This seems less important than safety, but my impression is: moderately.

3 comments


comment by trevor (TrevorWiesinger) · 2023-09-10T18:48:17.757Z · LW(p) · GW(p)

Strong upvoted. I agree that the more people look at the topic of AI regulation, the more they converge on US-China affairs and on wondering why governments care about AI in the first place.

I don't really know anything about regulation, but the question of "why governments care about AI in the first place" is something that I've been laser-focused on for several years now. It's a really big question, and the models have lots of gears. It's pretty hard to answer questions like this in a comment, as opposed to a DM, but I can definitely say that I've written about a lot of these cruxes in my post on information warfare [LW · GW], and Ethan Edwards did some fantastic research [LW · GW] in which, among many other points relevant to geopolitics, he thoroughly demonstrates that LLMs are a match made in heaven for things like social-media public-opinion analysis/steering and analysis of bulk email collection. Since the end of the Cold War, one of the big goals for international affairs and security agencies has been preventing entire governments from having the rug pulled out from under them, as happened to East Germany. Unlike nuclear weapons and military force, information/hybrid warfare is a battleground where countries like the US and China can actually lose and win and pursue an endgame, similar to how economic catastrophes are unambiguously a deciding factor in whether the US or China ends up more powerful in 2030.

comment by 1a3orn · 2023-09-11T15:14:40.810Z · LW(p) · GW(p)

I think these are very good questions, and I thank you for making this list.

I also think this is a pretty bad title, and promotes an unfortunately adversarial approach on this question.

Considering China is a necessary part of deliberation about AI regulation, and if it only comes up as an adversarial objection to proposed regulation, as in "but China," then the earlier deliberation did not take place as it should have. If you're considering, say, US industrial policy, you should consider how it is influenced by and influences China before some third party is like "Yo, what about China and your industrial policy?"; a fortiori for AI.

Condensing another set of concerns into "but FOOM" or "but stagnation" or "but tyranny" would also be unfruitful. "Cruxes on China for Domestic AI regulation" or something seems a better title.

Replies from: Zach Stein-Perlman
comment by Zach Stein-Perlman · 2023-09-11T15:37:18.967Z · LW(p) · GW(p)

Sounds right, I'll remove "but China" and reframe away from responding to an adversarial objection. (For posterity: the title was "But China": Some cruxes for some domestic AI regulation.) Thanks.

(I do think "but China" is better than e.g. "but FOOM", because it carves reality close to its joints: it divides some US AI policy questions into their effects if the US keeps its lead versus their effects on the US lead, and "but China" names the latter set of considerations. But that doesn't make the title good.)