Reviewing the Structure of Current AI Regulations

post by Deric Cheng (deric-cheng), Elliot Mckernon (elliot) · 2024-05-07T12:34:17.820Z · LW · GW


This report is one in a series of ~10 posts comprising a 2024 State of the AI Regulatory Landscape Review, conducted by the Governance Recommendations Research Program at Convergence Analysis. Each post will cover a specific domain of AI governance (such as incident reporting [EA · GW], safety evals [EA · GW], model registries [LW · GW], and more). We’ll provide an overview of existing regulations, focusing on the US, EU, and China as the leading governmental bodies currently developing AI legislation. Additionally, we’ll discuss the relevant context behind each domain and conduct a short analysis.

This series is intended to be a primer for policymakers, researchers, and individuals seeking to develop a high-level overview of the current AI governance space. We’ll publish individual posts on our website and release a comprehensive report at the end of this series.

In this post, we’ll discuss a multifaceted, high-level topic: How are current AI regulatory policies structured, and what are the advantages and disadvantages of their choices? By focusing on the existing regulatory choices of the EU, US, and China, we’ll compare and contrast key decisions in terms of classifying AI models and the organization of existing AI governance structures.

What are possible approaches to classify AI systems for governance?  

Before passing any regulations, governments must answer for themselves several challenging, interrelated questions to lay the groundwork for their regulatory strategy: 

Complicating the matter, even precisely defining what an AI system is can be challenging: as a field, AI today encompasses many different forms of algorithms and structures. You’ll find overlapping and occasionally conflicting definitions of what constitutes “models”, “algorithms”, “AI”, “ML”, and more. In particular, the latest wave of foundational large language models (LLMs, such as ChatGPT) go by varying names under different governance structures and contexts, such as “general-purpose AI (GPAI)”, “dual-use foundation models”, “frontier AI models”, or simply “generative AI”. 

For the purposes of this review, we’ll rely on an extremely broad definition of AI systems from IBM: “A program that has been trained on a set of data to recognize certain patterns or make certain decisions without further human intervention.”

There are various viable approaches to classifying the development of AI models or algorithms into “regulatory boxes”. Many of these approaches may overlap with each other, or be layered to form a comprehensive, effective governance strategy. We’ll discuss some of them below:   

Certain regulatory approaches may involve a combination of two or more of these classifications. For example, the US Executive Order identifies a lower compute threshold for mandatory reporting for models trained on biological data, combining compute-level and application-level classifications.
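This layered classification can be made concrete with a short sketch. The threshold figures below reflect the Executive Order's reporting thresholds (10^26 operations in general, 10^23 for models trained primarily on biological sequence data); the function and field names are our own illustrative inventions, not anything defined in the Order.

```python
from dataclasses import dataclass

# Reporting thresholds from the 2023 US Executive Order (total training
# operations). The surrounding code is a hypothetical sketch.
GENERAL_THRESHOLD_OPS = 1e26
BIOLOGICAL_THRESHOLD_OPS = 1e23

@dataclass
class ModelProfile:
    training_ops: float              # total compute used during training
    trained_on_biological_data: bool # application-level attribute

def requires_reporting(model: ModelProfile) -> bool:
    """Combine a compute-level rule with an application-level rule:
    biological-data models face a much lower compute threshold."""
    threshold = (BIOLOGICAL_THRESHOLD_OPS
                 if model.trained_on_biological_data
                 else GENERAL_THRESHOLD_OPS)
    return model.training_ops >= threshold
```

The key design point is that the same model can fall in or out of scope depending on an application-level attribute, not just its raw compute.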

Point of Regulation

Closely tied to this set of considerations is the concept of point of regulation – where in the supply chain governments decide to target their policies and requirements. Governments must identify the most effective regulatory approaches to achieve their objectives, considering factors such as their level of influence and the ease of enforcement at the selected point.

The way AI systems are classified under a government's regulatory framework directly informs the methods they employ for regulation. That is, the classification strategy and the point of regulation are interdependent decisions that shape a government’s overall regulatory strategy for AI.

As an example: 

What are important tradeoffs when designing regulatory structures for AI governance? 

How should a government structure its AI governance, and what factors might that structure depend on? We’ll mention several relevant considerations, which will be discussed further in the context of each government’s approach to legislation.

Centralized vs. Decentralized Enforcement

In a centralized AI governance system, a single agency or regulatory body may be responsible for implementing, monitoring, and enforcing legislation. Such a body may be able to operate more efficiently by consolidating technical expertise, resources, and jurisdiction. For example, a single agency could coordinate more easily with AI labs to design a single framework for regulating multi-functional LLMs, or be able to better fund technically complex safety evaluations by hiring leading safety researchers.

However, such an agency may fail to effectively account for the varied uses of AI technology, or lean too far towards “one-size-fits-all” regulatory strategies. For example, a single agency may be unable to simultaneously effectively regulate use-cases of LLMs in healthcare (e.g. complying with HIPAA regulations), content creation (e.g. preventing deepfakes), and employment (e.g. preventing discriminatory hiring practices), as it may become resource constrained and lack domain expertise. A single agency may also be more susceptible to regulatory capture from AI labs.

In contrast, decentralized enforcement may spread ownership of AI regulation across a variety of agencies or organizations focused on different concerns, such as the domain of application or method of oversight. This approach might significantly improve the application of governance to specific AI use-cases, but risks stretching agencies thin as they struggle to independently evaluate and regulate rapidly-developing technologies. 

Decentralized governmental bodies may not take ownership of novel AI technologies without clear precedent (such as deepfakes), and key issues may “slip between the gaps” of different regulatory agencies. Alternatively, they might attempt to overfit existing regulatory structures onto novel technologies, with disastrous outcomes for innovation. For example, the SEC’s attempt to map emerging cryptocurrencies onto its existing definition of securities has led it to declare that the majority of cryptocurrency projects are unlicensed securities subject to shutdown.

Vertical vs Horizontal Regulations

A very similar set of arguments can be applied to the regulations themselves. A horizontally-integrated AI governance policy (such as the EU AI Act) applies new legislation to all use cases of AI, effectively forcing any AI models in existence to comply with a wide-ranging and non-specific set of regulations. Such an approach can provide a comprehensive, clearly defined structure for new AI development, simplifying compliance. However, horizontally-integrated policies can also be criticized for “overreaching” in scope, by applying regulations too broadly before legislators have developed expertise in managing a new field, and potentially stifling innovation as a result.

In contrast, vertical regulations may be able to target a single domain of interest precisely, focusing on a narrow domain like “recommendation algorithms”, “deepfakes”, or “text generation” as demonstrated by China’s recent AI regulatory policies. Such vertical regulations can be more straightforward to implement and enforce than a broad set of horizontal regulations, and can allow legislators to concentrate on effectively managing a narrow set of use cases and considerations. However, they may not account effectively for AI technologies that span multiple domains, and could eventually lead to piecemeal, conflicting results as different vertical “slices” take disjointed approaches to regulating AI technologies.

How are leading governments approaching AI Governance? 

China

Over the past three years, China has passed a series of vertical regulations targeting specific domains of AI applications, led by the Cyberspace Administration of China (CAC). The three most relevant pieces of legislation include: 

  1. Algorithmic Recommendation Provisions: Initially published in August 2021, these provisions enforce a series of regulations targeting recommendation algorithms, such as those that provide personalized rankings, search filters, decision-making, or “services with public opinion properties or social mobilization capabilities”. Notably, they created a mandatory algorithm registry requiring all qualifying algorithms operated by Chinese organizations to be registered within 10 days of public launch.
  2. Deep Synthesis Provisions: Initially published in November 2022, these provisions regulate the use of algorithms that synthetically generate content such as text, voice, images, or videos. They were intended to combat the rise of “deepfakes”, and require content labeling, user identity verification, and that providers prevent “misuse” as broadly defined by the Chinese government.
  3. Interim Generative AI Measures: Initially published in July 2023, this set of regulations was a direct response to the wave of excitement following ChatGPT’s release in late 2022. It expands on the policies proposed in the Deep Synthesis Provisions to better encompass multi-use LLMs, strengthening provisions such as anti-discrimination requirements, requirements for training data, and alignment with national interests.

The language used by these AI regulations is typically broad, high-level, and non-specific. For example, Article 5 of the Interim Generative AI Measures states that providers should “Encourage the innovative application of generative AI technology in each industry and field [and] generate exceptional content that is positive, healthy, and uplifting”. In practice, this wording extends greater control to the CAC, allowing it to interpret its regulations as necessary to enforce its desired outcomes.

Notably, China created the first national algorithm registry in its 2021 Algorithmic Recommendation Provisions, focusing initially on capturing all recommendation algorithms used by consumers in China. Because the registry defines the concept of “algorithm” quite broadly, organizations must often submit many separate, detailed reports for the various algorithms in use across their systems. In subsequent legislation, the CAC has continually expanded the scope of this algorithm registry to include updated forms of AI, including all LLMs and AI models capable of generating content. 
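The registry's 10-day registration window described above can be expressed as a simple compliance check. This is a hypothetical sketch; the function names and rule encoding are our own, not the CAC's.

```python
from datetime import date, timedelta

# The Algorithmic Recommendation Provisions require registration within
# 10 days of public launch (per the description above).
REGISTRATION_WINDOW_DAYS = 10

def registration_deadline(public_launch: date) -> date:
    """Latest date by which a qualifying algorithm must be registered."""
    return public_launch + timedelta(days=REGISTRATION_WINDOW_DAYS)

def is_compliant(public_launch: date, registered_on: date) -> bool:
    """True if the registration was filed within the 10-day window."""
    return registered_on <= registration_deadline(public_launch)
```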

What are key traits of China’s AI governance strategy?

China’s governance strategy is focused on tracking and managing algorithms by their domain of use: 

China is taking a vertical, iterative approach to developing progressively more comprehensive legislation, by passing targeted regulations concentrating on a single type of algorithm at a time:

China strongly prioritizes social control and alignment in its AI regulations: 

China has demonstrated an inward focus on regulating Chinese organizations and citizens: 

The EU

The European Union (EU) has conducted almost all of its AI governance initiatives within a single piece of legislation: the EU AI Act, formally adopted in March 2024. Initially proposed in 2021, this comprehensive legislation aims to regulate AI systems based on their potential risks and safeguard the rights of EU citizens.

At the core of the EU AI Act is a risk-based approach to AI regulation. The act classifies AI systems into four categories: unacceptable risk, high risk, limited risk, and minimal risk. Unacceptable risk AI systems, such as those that manipulate human behavior or exploit vulnerabilities, are banned outright. High-risk AI systems, including those used in critical infrastructure, education, and employment, are subject to strict requirements and oversight. Limited risk AI systems require transparency measures, while minimal risk AI systems are largely unregulated.
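The four-tier structure above amounts to a cascading classification: a use case is checked against the prohibited category first, then high-risk domains, then transparency-triggering uses, with everything else defaulting to minimal risk. The sketch below illustrates this logic; the keyword buckets are simplified stand-ins for the Act's detailed annexes, not its actual legal definitions.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # banned outright
    HIGH = "high"                  # strict requirements and oversight
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # largely unregulated

# Hypothetical, simplified buckets for illustration only.
PROHIBITED_USES = {"behavioral manipulation", "social scoring"}
HIGH_RISK_DOMAINS = {"critical infrastructure", "education", "employment"}
TRANSPARENCY_USES = {"chatbot", "synthetic media"}

def classify(use_case: str) -> RiskTier:
    """Cascade through the tiers from most to least restrictive."""
    if use_case in PROHIBITED_USES:
        return RiskTier.UNACCEPTABLE
    if use_case in HIGH_RISK_DOMAINS:
        return RiskTier.HIGH
    if use_case in TRANSPARENCY_USES:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL
```

Note the ordering matters: because the tiers cascade, a use case is assigned the most restrictive tier that matches it.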

In direct response to the public emergence of foundational AI models, beginning with the launch of ChatGPT in late 2022, the Act includes clauses specifically addressing the challenges posed by general-purpose AI (GPAI). GPAI systems, which can be adapted for a wide range of tasks, are subject to additional requirements, including being categorized as high-risk systems depending on their intended domain of use.

What are key traits of the EU’s AI governance strategy?

The EU AI Act is a horizontally integrated, comprehensive piece of legislation implemented by a centralized body: 

The EU has demonstrated a clear prioritization of the protection of citizens’ rights: 

The EU AI Act implements strict and binding requirements for high-risk AI systems: 

The US

In large part due to legislative gridlock in the US Congress, the United States has taken an approach to AI governance centered around executive orders and non-binding declarations by the Biden administration. Though this approach has key limitations, such as the inability to allocate budget for additional programs, it has resulted in a significant amount of executive action over the past year. 

Three key executive actions stand out in shaping the US approach: 

  1. US / China Semiconductor Export Controls: Launched on Oct 7, 2022, these export controls (and subsequent updates) on high-end semiconductors used to train AI models mark a significant escalation in US efforts to restrict China's access to advanced computing and AI technologies. The rules, issued by the Bureau of Industry and Security (BIS), ban the export of advanced chips, chip-making equipment, and semiconductor expertise to China. They aim to drastically slow China's AI development and protect US national security by targeting the hardware essential to develop powerful AI models. 
  2. Blueprint for an AI Bill of Rights: Released in October 2022, this blueprint outlines five principles to guide the design, use, and deployment of automated systems to protect the rights of the American public. These principles include safe and effective systems, algorithmic discrimination protections, data privacy, notice and explanation, and human alternatives, consideration, and fallback. While non-binding, the blueprint aims to inform policy decisions and align action across all levels of government.
  3. The Executive Order on Artificial Intelligence: Issued in October 2023, this order directs various federal agencies to act to promote the responsible development and use of AI. It calls for these agencies to develop AI risk management frameworks, develop AI standards and technical guidance, create better systems for AI oversight, and foster public-private partnerships. It marks the first comprehensive and coordinated effort to shape AI governance across the federal government, but lacks binding regulation or specific details as it primarily orders individual agencies to publish reports on next steps.

What are key traits of the US’ AI governance strategy?

The US’ initial binding regulations focus on classifying AI models by compute ability and regulating hardware:

Beyond export controls, the US appears to be pursuing a decentralized, largely non-binding approach relying on executive action:

US AI policy is strongly prioritizing its geopolitical AI arms race with China:
