Regulate or Compete? The China Factor in U.S. AI Policy (NAIR #2)
post by charles_m (charles-martinet) · 2023-05-05T17:43:42.417Z · LW · GW · 1 comment
This is a link post for https://navigatingairisks.substack.com/p/regulate-or-compete-china-factor-us-ai-policy
This post was published as part of the 2nd edition of Navigating AI Risks, a newsletter about the governance of transformative AI risks written by @simeon_c [LW · GW], @Henry Papadatos [LW · GW], and me. Feel free to submit your feedback on the newsletter here.
Summary
The supposed trade-off between regulation and technology leadership is widely put forward as an argument for avoiding regulation. But at present, the US can preserve its AI leadership over China without sacrificing effective AI governance.
In the context of US-China strategic competition, two key variables define policymakers' risk-benefit analysis regarding AI regulation:
- the desirability of AI regulation and its impact on innovation capacity
- whether the United States is on course to maintain (or increase) its AI lead over China
Here are my observations on the first variable (desirability of AI regulation):
- Regulation can promote and shape innovation
- Increasingly powerful systems are rendering existing laws insufficient, and regulation helps avoid large-scale accidents
- Regulation can foster better regulation
- Regulation helps facilitate adoption
Here are my observations on the second variable (likelihood that the US maintains its AI lead over China):
- China has ambitious goals and is committing substantial resources to achieve them
- China currently remains behind the US
- China's autocratic advantage is overestimated
- China doesn’t prioritize AI leadership at all costs
- Innovation is insufficient by itself to get ahead
- The US has a well-developed network of alliances it can use to its benefit
I conclude that (i) the United States' AI lead over China and (ii) the risks posed by AI both seem large enough to justify setting up guardrails, even at some cost to innovation. The United States doesn’t have to choose between maintaining AI leadership over China and hardwiring both safety procedures and democratic values into its AI governance ecosystem. It should do both.
Introduction
The modern story of tech regulation in the United States is a story about China. Worries that regulation may damage the US's technology leadership vis-à-vis China have often been cited as a reason not to regulate. This concern played an important role in debates on privacy rules[1], breaking up Big Tech companies[2], and now AI[3].
This trade-off between regulation and technological preeminence seems obvious. But the situation is more complicated. The preservation of AI leadership over China need not come at the cost of effective AI governance.
There are two key variables at play here:
- The desirability of regulation and its impact on innovation: Are the risks posed by AI significant enough to justify additional regulation? Will such regulations impede innovation[4]?
- Whether the United States is on course to maintain (or increase) its lead over China: Is the US AI lead over China large enough to provide sufficient reassurance that (innovation-stifling) regulation would not place it at a competitive disadvantage?
These multifaceted and critical questions merit thorough consideration. Is it justifiable for the US to avoid regulating AI on the grounds that regulation might allow China to become the world's leading AI powerhouse? Here are some tentative observations:
I. Why regulate?
a. Regulate to promote and shape innovation
In the 1960s, the automotive industry saw “a host of new or stronger safety requirements [that] led — often after stiff opposition — to new technologies like airbags, antilock brakes, electronic stability control and, recently, rearview cameras and automatic braking.”[5]
A similar dynamic[6] was at work when liability for fraudulent credit card transactions shifted from consumers to payment networks in the 1970s[7]. Innovation is not unidirectional; regulation can help shape it in desirable directions[8].
b. Regulate because increasingly powerful systems are rendering existing laws insufficient
The increasing performance, diverse abilities, and potential for high-consequence misuse of general-purpose AI systems make it increasingly urgent to put guardrails in place around the underlying model (e.g. GPT-4) and the company that develops it (OpenAI), rather than only around its specific applications (ChatGPT) or use cases (writing an essay[9]), which are often already regulated by existing laws[10].
A broad range of stakeholders now agree that the pace of AI progress is creating a real need for guardrails (including the CEO of the world’s leading AI lab, OpenAI[11]). Many of these calls come from a growing recognition that a race to the bottom is undesirable. When do the risks posed by frontier general-purpose AI become large enough to warrant regulation, even if it stifles innovation?
c. Regulate to better regulate
Mandating algorithmic audits, or at least reporting and transparency requirements[12], would give policymakers more insight into how these models work, insight they could use to craft carefully targeted regulation that minimizes AI risks while seizing AI's benefits[13]. These procedures would have a minimal impact on the pace of research and development.
d. Regulate to avoid large-scale accidents
An increasing number of experts worry about large-scale accidents, up to and including human extinction. Microsoft’s Bing threatening users is a low-stakes example of an issue called “goal misgeneralization”, in which an AI “system competently pursues the wrong goal”. This challenge is just one of many obstacles to making advanced systems do what we want them to do. Much ink has already been spilled by the AI safety community on the topic, but progress remains slow.
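To make the failure mode concrete, here is a deliberately toy Python sketch, loosely inspired by the CoinRun example from the goal misgeneralization literature (the environment, reward setup, and “policy” are invented for illustration; this is not a model of Bing):

```python
# Deliberately toy sketch of goal misgeneralization. During training, the
# proxy goal ("always move right") and the intended goal ("collect the coin")
# always coincide, so the learned policy is consistent with both; the mismatch
# only surfaces once the deployment distribution changes.

def train_policy(training_levels):
    # Every training level places the coin at the far right, so a policy that
    # simply moves right earns maximal reward. Nothing in the training data
    # distinguishes "go right" from "get the coin".
    assert all(level["coin"] == "right" for level in training_levels)
    return lambda level: "move_right"

def got_coin(policy, level):
    # The agent reaches the coin only if the coin happens to lie in the
    # direction the learned policy moves.
    return policy(level) == f"move_{level['coin']}"

policy = train_policy([{"coin": "right"} for _ in range(1000)])

print(got_coin(policy, {"coin": "right"}))  # True: looks aligned in training
print(got_coin(policy, {"coin": "left"}))   # False: competently pursues the wrong goal
```

The point of the sketch is that nothing observable during training distinguishes the intended goal from the proxy, which is why such failures are hard to catch before deployment.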
Many worry that if the companies trying to develop what AI investor Ian Hogarth calls “God-like AI” cannot avoid today's accidents, they will almost certainly not be able to solve the future, potentially existential issues that arise with more powerful systems (which may come sooner than we think, warns MIT professor Max Tegmark).
e. Regulate to facilitate adoption
People and companies won’t widely adopt AI until they trust that advanced systems are not biased, unsafe, or inaccurate. As the wide array of failed AI model releases shows[14], commercial incentives don’t seem to be enough to ensure sustained efforts to make systems aligned with ethical principles.
Targeted regulation can also help avoid disasters, and thus secure long-term public buy-in for the technology. For instance, insufficiently enforced safety rules were a key reason the 2011 tsunami in Japan led to the accident at the Fukushima power plant[15]. The disaster triggered a public backlash and dealt a devastating blow to the country's nuclear industry.
f. Don't regulate?
Setting uniform rules for small models like Stable Diffusion or BERT and for very large models like the one behind ChatGPT could lead to excessive regulation of minimal-risk models and stifle innovative advances[16]. Badly designed regulations can push companies to forgo potentially beneficial and innocuous AI advances for fear of legal repercussions. That would harm the industry's competitiveness and citizens' welfare alike. But these are features of ill-conceived regulations, not of regulation in general.
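To make that scoping point concrete, here is a minimal, purely hypothetical sketch of what scale-tiered rules could look like; the FLOP thresholds and tier names below are invented for illustration and are not drawn from any actual proposal:

```python
# Purely hypothetical sketch of scale-tiered obligations. The FLOP thresholds
# and tier labels are invented for illustration; a real rule would need
# carefully chosen, regularly updated criteria (the point footnote 16 makes).

def regulatory_tier(training_flops: float) -> str:
    if training_flops < 1e23:    # roughly small models (BERT- or Stable Diffusion-scale)
        return "no extra obligations"
    elif training_flops < 1e25:  # mid-size models
        return "transparency and reporting requirements"
    else:                        # frontier-scale models
        return "audits and pre-deployment risk assessment"

for flops in (3e20, 5e23, 2e25):
    print(f"{flops:.0e} FLOP -> {regulatory_tier(flops)}")
```

Whatever criteria one picks, the design question is the same: define the regulated class precisely enough that low-risk models are left alone.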
So: there is a growing consensus that we need guardrails to mitigate the misuse and accident risks of increasingly powerful AI systems, and regulation can promote both AI adoption and innovation. Still, the prospect of China gaining a competitive advantage might not convince everyone that regulation would be a net good for the American public and national interests. Would it be more convincing if China still had a long way to go before catching up?
II. Can China surpass the US?
a. China has ambitious goals and is committing substantial resources to achieve them
China seems determined — its 2017 New Generation AI Development Plan set the goal of becoming the 'major AI innovation centre in the world' by 2030 — and dedicates huge sums of public spending to that end. The country has well-developed science & technology intelligence and tech transfer strategies. Its population size allows the country’s government and companies to access a large amount of data, and its large domestic market creates wide opportunities to commercialize AI products and services. Chinese citizens are also enthusiastic about AI, which may facilitate adoption; 78% of them agree that “products and services using AI have more benefits than drawbacks,” compared with 35% of US citizens.
Overall, China has come a long way and is now an innovation and science superpower. But does that mean it is close to the US in leading-edge AI?
b. China currently remains behind the US
Assessments by AI researchers[17], investors[18], and CEOs[19] generally see Chinese firms as behind US firms, often by a lot. Most national AI innovation and commercialization indexes show the United States ahead[20], especially on two crucial foundations of AI innovation: talent[21] and investment[22].
Crucially, US export controls on advanced AI chips all but guarantee that China won’t have access to the semiconductors it needs to train the next generation of AI systems[23]. As one (perhaps more than anecdotal) illustration, the most capable Chinese alternative to ChatGPT is simply subpar.
c. China's autocratic advantage is overestimated
The characteristics of China’s political system may also afford it structural advantages[24]. AI and autocracy may be mutually reinforcing, creating a synergy whereby “new technology bolsters autocratic power, and autocratic demand for technology stimulates innovation across sectors” (the AI-autocracy mutual reinforcement argument). In addition to its large population, the Chinese government has access to the industrial and personal data harvested by the private sector. As a result, it may be easier for China to amass data than it is for democracies (the data advantage argument[25]).
These two arguments seem to apply mostly to facial recognition and other AI fields that underpin surveillance, and less so to other segments of the AI industry[26]. Furthermore, only 1.4% of the Internet is in Chinese, compared with 60.4% in English. Because the Internet is now the dataset for the most powerful models, useful Chinese-language text may be a bottleneck.
d. China doesn’t prioritize AI leadership at all costs
A 2022 Chinese regulation requires companies to submit security assessments of recommendation algorithms to the government and to disclose the datasets used; China's proposed generative AI rules would impose stringent restrictions on the content produced by ChatGPT-like applications and make AI companies liable for violations.
In 2021, Xi Jinping's regulatory overhaul of China's tech industry wiped out $1 trillion in market capitalization. Journalist Ezra Klein said it well: “China seems perfectly willing to cripple the development of general A.I. so it can concentrate on systems that will more reliably serve state interests”.
e. Innovation is insufficient by itself to get ahead
Research and development is not the only important factor in AI competition. The diffusion of AI throughout the economy and society is another key pillar of technological power. Simply having a lead in research and development doesn't guarantee long-term success.
You also need a sufficiently large pool of skilled engineers capable of deploying the latest innovations across sectors of the economy. Seen through this prism, “China is far from being a science and technology superpower” compared with the United States.
f. The US has a well-developed network of alliances it can use to its benefit
Most other AI powerhouses, such as the UK, Japan, or Israel, work closely with the US. In comparison, China is more isolated. The US could use this network to coordinate on export controls, conduct international AI R&D projects, or pool resources with its allies.
Even if China came dangerously close, the US would have many ways to act quickly to maintain its lead[27]. There are more effective ways than an unregulated AI sector to slow China's progress and extend the US's own lead[28].
Conclusion
The US hasn’t lost its edge. America may believe itself to be in a tight race when it is not. Its AI lead seems large enough to take the time to set up guardrails around this emerging technology. Even if regulation stifles some innovation (a cost that can very well be avoided if the regulation is carefully designed[29]), it is needed to mitigate AI risks.
‘Regulate or compete?’ may be a false dilemma. The United States doesn’t have to choose between maintaining AI leadership over China and hardwiring both safety procedures and democratic values into its AI governance ecosystem. It should do both.
This post was written with the help of my two brilliant collaborators, Simeon Campos and Henry Papadatos. See the original version here, along with a recap of recent developments in AI policy and industry, and reading recommendations.
- ^
During a hearing before the U.S. Congress on the topic of data privacy rules, then-Facebook CEO Mark Zuckerberg said that “we still need to make it so that American companies can innovate in areas” like facial recognition, “or else we’re going to fall behind Chinese competitors”. As documented by the AI Now Institute, a think-tank argued in 2021 that the “U.S. Congress should ensure any change to federal data privacy legislation does not limit data collection and use of AI.”
- ^
In a 2021 white paper entitled ‘National Security Issues posed by House Antitrust Bills’, the CCIA, a trade association whose members include Google, Amazon, Uber, Meta, and Apple, cited “Undermining U.S. tech leadership” as a key risk of those proposed antitrust bills.
- ^
Notably by former Google CEO Eric Schmidt, who is influential in both tech industry and national security circles, as reported by Protocol: “‘Why don’t we wait until something bad happens and then we can figure out how to regulate it — otherwise, you’re going to slow everybody down. Trust me, China is not busy stopping things because of regulation. They’re starting new things,’ he said during an interview last year. Schmidt reiterated his antiregulation stance in October when the White House unveiled a nonbinding ‘Blueprint for an AI Bill of Rights.’”
- ^
This point was hinted at by US senator Chuck Schumer, who recently came forward with plans to regulate AI: “We don’t want to let China get ahead of us, but at the same time, we’ve got to make sure there’s safety and protection.”
- ^
A 1965 book on automotive safety, Ralph Nader's Unsafe at Any Speed, had a large impact on public and policy perception. A few years later, the US government created the National Highway Traffic Safety Administration: “The book had a seminal effect,” Robert A. Lutz, who was a top executive at BMW, Ford Motor, Chrysler and General Motors, said in a telephone interview. “I don’t like Ralph Nader and I didn’t like the book, but there was definitely a role for government in automotive safety.” Maybe, in a few years, we’ll see this in the news: “The letter had a seminal effect,” said Sam Altman, CEO of OpenAI. “I don’t like the Future of Life Institute and I didn’t like the open letter, but there was definitely a role for government in AI safety.”
- ^
New York University professor Gary Marcus has more examples: “Regulation is NOT always bad for technology, e.g., regulation around the environmental impact of cars has spurred advances in electric cars; rising fuel standards have had a positive impact as well. A 1966 US Army restriction on fixed-wing aircraft for airlifts presumably inspired innovation in helicopters, etc.”
- ^
“In the early 1970s, the fledgling credit card industry routinely and shortsightedly held cardholders liable for fraudulent transactions, even if their cards had been lost or stolen. In response, Congress passed the 1974 Fair Credit Billing Act to limit cardholder liability. This protection increased public trust in the new payment system and spurred growth and innovation. Because they could no longer just pass fraud losses on to cardholders, payment networks devised one of the first commercial applications of neural networks to detect out-of-pattern card usage and reduce their fraud losses”. (source)
- ^
Shaping the direction of innovation is a key element of ‘differential technological development’, a strategy where risky or harmful technologies are delayed compared with risk-reducing or otherwise beneficial technologies. For example, higher carbon taxes lead to “lower levels of innovation in high-emissions technologies and higher levels of innovation in green technologies” (Differential technology development: A responsible innovation principle for navigating technology risks)
- ^
Ezra Klein talks about “alignment risk”, “the danger that what we want the systems to do and what they will actually do could diverge, and perhaps do so violently. Curbing alignment risk requires curbing the systems themselves, not just the ways we permit people to use them”.
- ^
Branches of law that apply to AI systems are numerous; they include intellectual property law, privacy law, internet law, product liability law, etc. Recently, officials from several U.S. government departments and agencies, including the Federal Trade Commission and the Department of Justice, released an announcement reiterating that “Existing legal authorities apply to the use of automated systems and innovative new technologies.”
- ^
In March 2023, OpenAI CEO Sam Altman emphasized in an interview that “Regulation will be critical and will take time to figure out” and tweeted that “we definitely need more regulation on AI”.
- ^
Or other mechanisms such as model cards, datasheets, or reward reports.
- ^
- ^
This includes the following: “When Meta released its large language model BlenderBot 3 in August 2022, it immediately faced problems of making inappropriate and untrue statements. Meta’s Galactica was only up for three days in November 2022 before it was withdrawn after it was shown confidently ‘hallucinating’ (making up) academic papers that didn’t exist. Most recently, in February 2023, Meta irresponsibly released the full weights of its latest language model, LLaMA. As many experts predicted would happen, it proliferated to 4chan, where it will be used to mass-produce disinformation and hate.”
- ^
An independent Commission found that “the causes of the accident had been foreseeable, and that the plant operator had failed to meet basic safety requirements such as risk assessment [or] preparing for containing collateral damage”. The Prime Minister of Japan at the time said the disaster “laid bare a host of even bigger man-made vulnerabilities in Japan's nuclear industry and regulation, from inadequate safety guidelines to crisis management, all of which he said need to be overhauled”. Another example is the low passenger numbers following the 2000 Concorde crash, a key reason why the airliner was retired in 2003. And this despite the Concorde's otherwise perfect record: zero deaths in the more than 20 years that preceded the crash.
- ^
This is why it’s extremely important to have a very precise definition that only targets the riskiest models.
- ^
“Training complex AI systems is not easy. OpenAI is ahead of its US competitors (including Google and Meta), and developers in China and other countries also lag behind. It’s unlikely that “rogue groups” or governments will surpass GPT-4’s capabilities in the foreseeable future. Most AI talent, knowledge, and computing infrastructure is concentrated in a handful of top [US] labs,” says AI researcher Rodolfo Ocampo.
- ^
“US and US-allied sanctions on advanced semiconductors, in particular the next generation of Nvidia hardware needed to train the largest AI systems, mean China is not likely in a position to race ahead of DeepMind or OpenAI,” says AI investor Ian Hogarth.
- ^
According to Conor Leahy, CEO of AI lab Conjecture, Chinese tech giants are irrelevant and probably won't catch up with US Big Tech Companies and large AI labs.
- ^
See, among others, the Emerging Technology Observatory’s Country Activity Tracker, the Stanford Institute for Human-Centered AI’s Global AI Vibrancy Tool, and ASPI’s Critical Technology Tracker. Keep in mind, however, that indexes are very imprecise tools for making such complex assessments (pp. 26-27). They also bring to light the zero-sum aspect of international tech interdependence, not its positive-sum aspects.
- ^
According to Paul Scharre, director of studies at the Center for a New American Security, “China produces the most top AI scientists in the world. More researchers publishing in top AI conferences come from China than any other country. But they don’t stay in China. Over half of China’s best undergraduates studying AI come to the U.S. for graduate school. And they stay. Over 90 percent of Chinese undergraduates who move to the U.S. for their PhD stay in the U.S. after graduation. The biggest beneficiary of Chinese talent isn’t China—it’s the United States.”
- ^
It seems likely that this will remain the case, especially since the US will soon restrict investments from US companies in China in certain sectors, including AI (US investors accounted for ~1/5 of investments in Chinese AI companies from 2015 to 2021).
- ^
In the long-term, however, export controls might incentivize the creation of a US-free semiconductor supply chain and lead China to double down on its self-sufficiency agenda. See CSET’s report on ‘Decoupling in Strategic Technologies’ for historical lessons on export controls and their implications for AI.
- ^
China may have other advantages because of its autocratic nature, though these arguments rest on more tenuous ground. Some see autocracies' ability to plan long-term and to control the direction of the economy as an advantage in the competition for AI. Others argue that collectivist cultures like China's are by nature less inventive than individualist ones like those in the West. Finally, other structural features, less related to political regime type, are that the Chinese economy faces dimming economic prospects, due in part to its aging population and its housing and debt problems.
- ^
It may also have a ‘deployment advantage’, as the country faces fewer constraints than democracies in deciding which technologies get rolled out and for what purpose.
- ^
Per Paul Scharre: “Data on facial recognition doesn’t necessarily help you in other areas. One of the arguments in favor of China’s alleged data advantage is that China is a larger country with a much larger user base. That’s true. But what probably matters much more is the user base that tech companies have, and U.S. tech companies have global reach. [...] The conclusion is counter to what a lot of people might initially think — I don’t think that China has a data advantage”; furthermore, certain technical advancements presented by Tim Hwang that reduce the importance of data for ML suggest that these advantages may be eroded.
- ^
Though this is not a shopping list, and the first- and second-order consequences of implementing such policies have to be carefully considered, this includes “export and import controls, inbound and outbound investment restrictions, telecommunications and electronics licensing regimes, visa bans, financial sanctions, technology transaction rules, federal spending limits,” most of which can have immediate effects on China’s AI industry.
- ^
Similarly to the point made above that regulation can help influence the direction of innovation, one such way to shift the dynamics that underpin US-China tech competition is through Tim Hwang’s 'terrain strategy', which consists in shaping “the direction of [AI] to provide structural advantages to itself and other democracies. This effort involves accelerating the development of certain areas within machine learning (ML)—the core technology driving the most dramatic advances in AI—to alter the global playing field.”
- ^
To be sure, some barriers to beneficial and robust AI regulation remain. For one, technical standards are not yet designed, let alone operational. The pace of technological progress is a constant challenge for policy, as EU policymakers are experiencing now that generative AI is challenging the rules of the EU's AI Act. As US sociobiologist E.O. Wilson lamented, humans have “Paleolithic emotions, medieval institutions, and god-like technology.” But several sound policy proposals would help update the legal environment within which AI operates. Ezra Klein’s set of key issues to resolve is a good roadmap, as are policy recommendations by the AI Now Institute or the Future of Life Institute.
1 comment
comment by BrooksT · 2023-05-05T19:45:52.531Z · LW(p) · GW(p)
Some interesting points, but many don't seem to connect to the conclusion. For instance, were the car safety regulations of the 1960s really instrumental to adoption of autos? Is the AI market today at a similar state of maturity as the auto market in the 1960s? Maybe? Was there a similar international competition? Well, yes, but the Japanese carmakers won in the '70s, so if this is an analog, does it really support the conclusion?