Chinese scientists acknowledge xrisk & call for international regulatory body [Linkpost]

post by Akash (akash-wasil) · 2023-11-01T13:28:43.723Z · LW · GW · 4 comments

This is a link post for https://www.ft.com/content/c7f8b6dc-e742-4094-9ee7-3178dd4b597f


Some highlights from the article (bolding added):

Several Chinese academic attendees of the summit at Bletchley Park, England, which starts on Wednesday, have signed on to a statement that warns that advanced AI will pose an “existential risk to humanity” in the coming decades.

The group, which includes Andrew Yao, one of China’s most prominent computer scientists, calls for the creation of an international regulatory body, the mandatory registration and auditing of advanced AI systems, the inclusion of instant “shutdown” procedures and for developers to spend 30 per cent of their research budget on AI safety.

The proposals are more focused on existential risk than US president Joe Biden’s executive order on AI issued this week, which encompasses algorithmic discrimination and labour-market impacts, as well as the European Union’s proposed AI Act, which focuses on protecting rights such as privacy.

Note that the statement was also signed by several Western experts, including Yoshua Bengio.

4 comments


comment by Zach Stein-Perlman · 2023-11-01T19:01:03.891Z · LW(p) · GW(p)

The actual statement: Prominent AI Scientists from China and the West Propose Joint Strategy to Mitigate Risks from AI:

Coordinated global action on AI safety research and governance is critical to prevent uncontrolled frontier AI development from posing unacceptable risks to humanity.

Global action, cooperation, and capacity building are key to managing risk from AI and enabling humanity to share in its benefits. AI safety is a global public good that should be supported by public and private investment, with advances in safety shared widely. Governments around the world — especially of leading AI nations — have a responsibility to develop measures to prevent worst-case outcomes from malicious or careless actors and to rein in reckless competition. The international community should work to create an international coordination process for advanced AI in this vein.

We face near-term risks from malicious actors misusing frontier AI systems, with current safety filters integrated by developers easily bypassed. Frontier AI systems produce compelling misinformation and may soon be capable enough to help terrorists develop weapons of mass destruction. Moreover, there is a serious risk that future AI systems may escape human control altogether. Even aligned AI systems could destabilize or disempower existing institutions. Taken together, we believe AI may pose an existential risk to humanity in the coming decades.

In domestic regulation, we recommend mandatory registration for the creation, sale or use of models above a certain capability threshold, including open-source copies and derivatives, to enable governments to acquire critical and currently missing visibility into emerging risks. Governments should monitor large-scale data centers and track AI incidents, and should require that AI developers of frontier models be subject to independent third-party audits evaluating their information security and model safety. AI developers should also be required to share comprehensive risk assessments, policies around risk management, and predictions about their systems’ behavior in third party evaluations and post-deployment with relevant authorities.

We also recommend defining clear red lines that, if crossed, mandate immediate termination of an AI system — including all copies — through rapid and safe shut-down procedures. Governments should cooperate to instantiate and preserve this capacity. Moreover, prior to deployment as well as during training for the most advanced models, developers should demonstrate to regulators’ satisfaction that their system(s) will not cross these red lines.

Reaching adequate safety levels for advanced AI will also require immense research progress. Advanced AI systems must be demonstrably aligned with their designer’s intent, as well as appropriate norms and values. They must also be robust against both malicious actors and rare failure modes. Sufficient human control needs to be ensured for these systems. Concerted effort by the global research community in both AI and other disciplines is essential; we need a global network of dedicated AI safety research and governance institutions. We call on leading AI developers to make a minimum spending commitment of one third of their AI R&D on AI safety and for government agencies to fund academic and non-profit AI safety and governance research in at least the same proportion.

comment by ryan_b · 2023-11-01T14:32:06.113Z · LW(p) · GW(p)

Hey, that's the first time I've seen safety emphasized as a budgetary commitment. Huzzah for another inch in the Overton window!

Replies from: Zach Stein-Perlman
comment by Zach Stein-Perlman · 2023-11-01T22:14:32.537Z · LW(p) · GW(p)

https://managing-ai-risks.com said "we call on major tech companies and public funders to allocate at least one-third of their AI R&D budget to ensuring safety and ethical use"

Replies from: ryan_b
comment by ryan_b · 2023-11-02T14:57:05.448Z · LW(p) · GW(p)

Woo! That's two in the span of one week!