Why I Am Skeptical of AI Regulation as an X-Risk Mitigation Strategy
post by A Ray (alex-ray) · 2022-08-06T05:46:48.416Z · LW · GW · 14 comments
Epistemic Signpost: I'm not an expert in policy research, but I think most of the points here are straightforward.
1. Regulations are ineffective at preventing bad behaviors
Here's a question/thought experiment: can you think of a large, possibly technology-related, company that has broken a regulation? What happened as a result? Was the result the total cessation of that behavior?
I'm not bothering to look up examples (at the risk of losing a tiny bit of epistemic points) because these sorts of events are so freaking common. I'm not going into the more heinous cases, where it wasn't even plausible to anybody that the behavior was defensible (the company just expected not to get caught), or cases where the behavior was more profitable than the fine imposed. My point is that the regulations failed to prevent the behavior.
If you try to defend this with "regulations are about incentives, not safety (prevention)", then I think you, too, should be pessimistic about AI regulation as a strategy for x-risk mitigation: incentives that merely make bad behavior costly are not enough when a single instance of the behavior could be catastrophic.
2. What does effective regulation look like?
Lest I be accused of making a fully-general counterargument [? · GW], here are some examples of regulations that seem to actually work:
2.1. Defense Export Controls
The United States has a long history of enforcing export controls on defense-related technologies, and I'll just focus on the USML (United States Munitions List) and CCL (Commerce Control List) here.
The USML is the more "extreme" of the two, and it is what people are referring to when they talk about ITAR (International Traffic in Arms Regulations). It covers things like firearms, missiles, and nuclear weapons, and licensing and export of these items are strictly controlled. I don't think anyone doubts that putting AI technology on the USML would have some sort of impact, but it seems pretty unlikely to happen (and also undesirable for other strategic reasons).
Lesser known is the CCL, which imposes a much more lightweight (but still strongly enforced) export license requirement. It's designed to let American technology companies sell their tech abroad even when the technology is relevant to national security.
An example relevant to AI technology is when the US government prevented the export of computer chips to China for use in a supercomputer. Everyone building a (publicly known) supercomputer claims it will be used for beneficial public science, but supercomputers are also a very important tool for designing nuclear weapons (which is one of the reasons supercomputer parts are export controlled).
2.2. Weirdly Strongly Enforced IP Laws
While the US has a fairly strong internal system of IP protections, these protections are often insufficient to prevent IP from being used outside of US jurisdiction. Normally I think this means that IP protections are not good candidates for AI regulation, but there is at least one example I like.
Semiconductor manufacturing equipment is, I think, also relevant to AI technology, and ASML is one of the most important manufacturers. Despite being a Dutch company, it is being prevented from selling its latest generation (EUV) manufacturing equipment to China. This is ostensibly because the Dutch government hasn't approved an export license, but it seems to be basically common knowledge that this is due to the US government's influence.
(It's possible this is really just defense export enforcement, but people sometimes talk about it as IP enforcement, in particular for the older and longer-standing ban on EUV exports, as opposed to the more recent pressure on DUV exports.)
3. What I would like to see in AI Regulation proposals
There are two things I would like to see in any AI regulation policy research that's purportedly for x-risk reduction:
First, an acknowledgement that the authors understand, and are not blind to, the reality that regulations, especially those aimed at large technology companies, are largely ineffective at curtailing behavior.
Second, an explanation of some very good reasons why their plan will actually **work** at preventing the behavior.
Then I would be much more excited to hear your AI regulation policy proposal.
14 comments
Comments sorted by top scores.
comment by trevor (TrevorWiesinger) · 2022-08-06T07:20:36.313Z · LW(p) · GW(p)
When it comes to global industries that have ludicrously massive national security implications, such as the tech industry, regulation is more likely to function properly if it is good for national security, and less likely to function properly if it is bad for national security.
Weakening domestic AI industries is bad for national security. This point has repeatedly been made completely clear to the AI governance people with relevant experience in this area, since 2018 at the latest and probably years before. Every year since then, they keep saying that they're not going to slow down AI via regulations.
This is obviously not the full story, but it's probably the most critical driving factor in AI regulation; at least, it's the one with the biggest information undersupply in the LessWrong community, and possibly among AI safety people in general. I'm really glad to see people approaching this problem from a different angle, diving into the details on the ground, instead of just making broad statements about international dynamics.
comment by Zach Stein-Perlman · 2022-08-06T15:04:50.466Z · LW(p) · GW(p)
I agree that effective regulation would be hard. Indeed, we should expect it to be particularly hard to regulate against powerful AI, since finding effective regulation in an emerging domain is hard and since (perceived) incentives to create powerful AI may be strong.
But to me, the upshot is less "regulation won't work" than "regulation needs teeth." I'm currently most excited about potential regulations that involve strong enforcement and either strong auditing/verification or reliable passive monitoring. (This is similar to the two things in your section 2, framed differently.)
comment by Nathan1123 · 2022-08-07T03:59:59.284Z · LW(p) · GW(p)
When it comes to AI regulation, a certain train of thought comes to my mind:
- Because a superintelligent AI has never existed, we can assume that creating one requires an enormous amount of energy and resources.
- Due to global inequality, certain regions of the world have exponentially more access to energy and resources than other regions.
- Therefore, when creating an AGI becomes possible, only a couple of regions of the world (and only a small number of people in these regions) will have the capability of doing so.
Therefore, enforcement of AI regulations only has to focus on this very limited population, and educate them on the existential threat of UFAI.
I think it is best to consider it analogous to another man-made existential threat: nuclear weapons. True, there is always a concern of a leak in international regulations (the Soviet nuclear arsenal that disappeared with the fall of the USSR, for example), but generally speaking there is a great filter of cost (procuring and refining uranium, training and educating domestic nuclear researchers, etc.) such that only a handful of nations in the world have ever built such weapons.
comment by TekhneMakre · 2022-08-06T06:44:33.421Z · LW(p) · GW(p)
Regulation by governments seems unlikely to work. But there's another kind of regulation: social. It's not so hard to list hypothetical biotechnologies that are so taboo that few respectable people would even discuss them, let alone publish papers about them. Can this social situation be, ahem, cloned for trying to create AGI?
↑ comment by lc · 2022-08-06T07:14:17.495Z · LW(p) · GW(p)
Probably not.
↑ comment by trevor (TrevorWiesinger) · 2022-08-06T07:23:05.848Z · LW(p) · GW(p)
Agreed wholeheartedly. Even thinking about trying to pull something like this could directly cause catastrophically bad outcomes.
↑ comment by lc · 2022-08-06T07:25:59.263Z · LW(p) · GW(p)
> Even thinking about trying to pull something like this could directly cause catastrophically bad outcomes.
I wouldn't go that far, but yes, some possible action plans seem particularly unlikely to be net-positive.
↑ comment by trevor (TrevorWiesinger) · 2022-08-06T07:34:41.494Z · LW(p) · GW(p)
AI engineers seem like a particularly sensitive area to me; they're taken very seriously as a key strategic resource.
↑ comment by TekhneMakre · 2022-08-06T09:18:43.159Z · LW(p) · GW(p)
What? How?
↑ comment by TekhneMakre · 2022-08-06T08:25:24.920Z · LW(p) · GW(p)
Why?
↑ comment by lc · 2022-08-06T08:36:40.354Z · LW(p) · GW(p)
Let me put it like this: if a dictator suddenly rose to power in a first-world country on the promise of building a deep-learning-enabled eutopia, and then used that ideology to rationalize the execution of eleven million civilians during WW3, that would be the first of a series of unlikely events needed to enable others to implement the taboo you seek, if you wanted to do it the way it was done the first time around.
There are intermediary positions. Capabilities engineers have sometimes been known to stop Capabilities Engineering when they realize they're going to end the world. But that particular moratorium on research was kind of a one-time thing.
↑ comment by TekhneMakre · 2022-08-06T09:21:48.166Z · LW(p) · GW(p)
I didn't think the taboo on human cloning was due to WW2, but if it is then I'd buy your argument more. The first few results of a quick Google Scholar search for "human cloning ethics" don't seem to mention "Nazi", but I don't know, maybe that's the root of it.
Edit: So for example, what if you get the Christians very upset about the creation of minds? (Not saying this is a good idea.)
↑ comment by Noosphere89 (sharmake-farah) · 2022-08-06T15:57:38.801Z · LW(p) · GW(p)
A large part of the reason biotechnologies could be banned is the fear of eugenics, which took hold after WWII. Essentially, it lowered resistance to regulations grounded in that fear of eugenics. In other words, it accidentally prevented us from pursuing biotech.
↑ comment by TekhneMakre · 2022-08-06T18:20:20.643Z · LW(p) · GW(p)
Could you point to a source re/ cloning? What you say seems right w.r.t. eugenics, and cloning is vaguely related to eugenics by being about applying technology to reproduction. My quick look, though, didn't find arguments about cloning that mention Nazis. The influence could be hidden, though, so maybe there's some social scientist who's tracked the anthropology of the taboo on cloning. As another example, consider gain-of-function research. I don't think that taboo is about WWII; I think it's about, like, don't create dangerous viruses. (Though that taboo is apparently maybe not strong enough.)