How is AI governed and regulated around the world?

post by Mitchell_Porter · 2023-03-30T15:36:55.987Z · LW · GW · 6 comments

This week, first came an open letter calling for a "pause" on "giant AI" research (LW discussion [LW · GW]), which received worldwide media coverage; then Eliezer went further in TIME Ideas, sketching what a globally enforced ban on AGI research would look like (LW discussion [LW · GW]).

Many online voices are saying it could never happen, but I think they underestimate the visceral, common-sense fear that many ordinary people have regarding artificial intelligence. Most people are not looking to transcend humanity, nor are they particularly in denial about the possibility of technology producing something smarter than human.

There is genuine potential for an anti-AI movement to come into being that simply wants to "shut it all down". Of course, such a movement would quickly run up against the power centers in science, commerce, and national security that want to push the boundaries. Between the corporate scramble to develop and market ever more powerful software, and the new era of geopolitical polarization, it might seem impossible that the "AI arms race" could ever be halted.

However, fear is a great motivator. During the Cold War, fear of nuclear war compelled the USA and the USSR to restrain themselves in their otherwise unrestrained struggle for supremacy; it also led to the creation of the Nuclear Non-Proliferation Treaty and the International Atomic Energy Agency, a system for the worldwide management of nuclear technology that ultimately answers to the United Nations Security Council, the most powerful institution in human affairs.

If the member states of the United Nations, and in particular the ruling elites of the permanent members of the Security Council, genuinely became convinced that sufficiently powerful artificial intelligence is a threat to the human race, they truly could organize a global ban on AGI research, even up to the point of military enforcement. They would have to deal with mutual distrust, but there are ways around that: to mention one example, they could remain suspicious of each other while cooperating to force a ban on everyone else.

I won't express an opinion on how successful such a ban might be, or how long it would last; but the creation of a global anti-AI regime is, I think, a political possibility.

However, if it were to happen, it would have to develop out of the frameworks for regulation and governance of AI that the world's nations are already developing, individually and collectively. That's why I made this post: to collect information on how AI is currently regulated. It would be nice to have some facts on how it is regulated in each of the G-20 countries, for example.

For now I'll just link to Wikipedia: 

Regulation of artificial intelligence

which currently has sections on AI regulation in three of the five permanent Security Council members: Britain, America, and China (Russia and France are not mentioned, though there are sections on European regulation).

6 comments

comment by Noosphere89 (sharmake-farah) · 2023-03-30T16:02:32.855Z · LW(p) · GW(p)

The problem is that Eliezer's article explicitly calls for violence via airstrikes, which is, to be a little blunt, almost certainly never going to be accepted by the major powers, and that's a huge amount of negative PR that AI companies can use against the AI-doom people.

And even if it were accepted, I still wouldn't agree with the article, for multiple reasons; but Eliezer's article is basically giving AI companies free ammunition to use against the AI-doom position.

Replies from: Mitchell_Porter, None
comment by Mitchell_Porter · 2023-03-31T01:20:05.825Z · LW(p) · GW(p)

violence via airstrikes ... almost certainly never going to be accepted by the major powers

They can accept it if they are not the ones who get bombed. 

I point to the NNPT as precedent. There is one rule for the nuclear weapons states, and another rule for everyone else. The nuclear weapons states get to keep their nukes, everyone else agrees not to develop them. 

In this case it's a little different, because the premise is that AGI is safe for no one. But it could work like this. Suppose that, as with the NNPT, the five permanent members of the UN Security Council are the privileged states. Then the distinction is between how the five enforce the AGI ban among each other, and how they enforce it on everyone else. Among each other, they can be collegial and understanding of each other's interests. For everyone else, diplomacy is given a chance, but there is much less patience for wilful evaders and violators of the ban.

Replies from: Jeff Rose
comment by Jeff Rose · 2023-03-31T01:45:32.007Z · LW(p) · GW(p)

Non-signatories to the NPT (Israel, India, Pakistan) were able to develop nuclear weapons, and did, without being subject to military action. By contrast (and very much contrary to international law), Yudkowsky proposes that non-signatories to his treaty be subject to bombardment.

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2023-03-31T07:28:06.451Z · LW(p) · GW(p)

Yes, the analogy is imperfect. An anti-AGI treaty with the absoluteness that Eliezer describes would treat the creation of AGI not just as an increase in danger that needs to be deterred, but as a tipping point that must never be allowed to happen in the first place. And that could lead to military intervention in a specific case, if lesser interventions (diplomacy, sabotage) failed to work.

Whether such military intervention, as a last resort, would satisfy international law depends on the details. If all the great powers supported such a treaty, and if e.g. its application was supervised by the Security Council, I think it would necessarily be legal.

On the other hand, if tomorrow some state on its own attacked the AI infrastructure of another state, on the grounds that the second state is endangering humanity... I'm sure lawyers could be found to argue that it was a lawful act under some principle or statute; but their arguments might meet resistance. 

The main thing I am arguing is that a global anti-AI regime does not inherently require nuclear brinkmanship or sovereign acts of war. 

comment by [deleted] · 2023-03-30T16:08:21.165Z · LW(p) · GW(p)

The other issue that falls out of this is that if anyone does successfully defect in secret, while every other power honors the ban, they get an insurmountable advantage. Self-replicating factories, buried deep underground or under the ocean, could give the side that does this certain victory and control of the planet. Nukes won't be enough: you can't deal with an exponential problem with a linear number of weapons. (Once there are more self-replicating nodes than there are nukes on the planet, victory for the side with the AGI is probably certain. Conventional militaries would not be able to deal with swarm attacks, perfect aim, inter-machine coordination, and so on.)
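A toy back-of-the-envelope sketch makes the exponential-versus-linear point concrete. The stockpile size, doubling time, and single seed factory below are illustrative assumptions, not estimates from anything in this thread:

```python
# Toy model: self-replicating nodes doubling at a fixed interval, versus a
# fixed stockpile of weapons. All numbers are illustrative assumptions.

STOCKPILE = 12_500   # assumed worldwide warhead count (rough public figure)
DOUBLING_DAYS = 30   # assumed doubling time for a self-replicating node

nodes, days = 1, 0   # a single hidden seed factory
while nodes <= STOCKPILE:
    nodes *= 2
    days += DOUBLING_DAYS

print(f"Nodes outnumber the stockpile after {days} days ({nodes:,} nodes).")
# With these assumptions: 420 days (14 doublings, 16,384 nodes).
```

The point is only the shape of the curve: because the crossover depends logarithmically on the stockpile, changing the assumed numbers moves the date by months, not orders of magnitude.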

Victory comes to whichever side first has control and monitoring over the majority of the planet, including the oceans. I don't know of any technology that can achieve this that doesn't first require a controllable network of systems with the capabilities of AGI/ASI.

Controlling the GPUs isn't enough; it's too easy to build similar devices. Economics has concentrated most of the fabs at TSMC because it's more efficient, but in a world where we know AGI is possible and GPUs/TPUs are a strategic advantage, you would expect every world power to start building its own.

Replies from: nathan-helm-burger
comment by Nathan Helm-Burger (nathan-helm-burger) · 2023-03-31T18:37:48.529Z · LW(p) · GW(p)

Yes, if diplomacy fails and things do come to an uncontrollable state, where chaos, dangerous hidden compute, or violence is the likely outcome... In the long run the exponential wins, but existing military power starts with an advantage, so a decisive early strike can abort an early-stage runaway RSI-ing AGI. I really hope it won't come to that. I really hope that more of a 'worldwide monitoring and policing' action will be adequate to prevent defection.

 

Currently, the US military has good enough satellite monitoring of the world to detect most large-scale engineering projects, like undersea datacenter construction. This isn't easy, though: undersea datacenter construction is easier than it sounds. Example: https://news.microsoft.com/source/features/sustainability/project-natick-underwater-datacenter/