The new UK government's stance on AI safety

post by Elliot Mckernon (elliot) · 2024-07-31T15:23:59.235Z · LW · GW · 0 comments

Contents

  Previously on The UK’s AI Policy
  What do the new guys say?
  What have the UK AISI and governmental friends been up to?
    AI Opportunities Unit
    King v Baron
    Reports from the UK AISI and AI Seoul Summit

tl;dr: The new UK government will likely continue to balance encouraging AI innovation for public good against increasing regulation for public safety, with so-far rhetorical calls for stricter regulation than the previous government’s. Several reports have been published by the government and the UK AI Safety Institute, including the latter’s first technical report on model evaluation.

Previously on The UK’s AI Policy

Erstwhile Prime Minister Rishi Sunak took office in October 2022 and quickly announced a suite of new AI policies and plans. Broadly, Sunak's government saw AI as a stonking big opportunity for the UK's economy and society, via becoming a hub of AI development, revolutionizing public services, and providing $1 trillion in value for the UK by 2035. They described their regulatory approach as pro-innovation, calling for government oversight and, eventually, greater requirements on developers of frontier AI. 


However, Sunak did take AI safety and even existential risk seriously, saying

Get this wrong, and AI could make it easier to build chemical or biological weapons. Terrorist groups could use AI to spread fear and destruction [...] in the most unlikely but extreme cases, there is even the risk that humanity could lose control of AI completely [through] ‘super intelligence’. [...] I don’t want to be alarmist. And there is a real debate about this [...] But however uncertain and unlikely these risks are, if they did manifest themselves, the consequences would be incredibly serious.

To address these risks, the government organized the first international AI Safety Summit. You can read my summaries of the plans for and outcomes from the summit if you want more detail, but briefly, the summit resulted in the international Bletchley Declaration and a promise of ~£700 million over 7 years to the UK AI Safety Institute (née Frontier AI Safety taskforce, and not to be confused with the US or Canadian AI Safety Institutes, with both of whom they are partnered, nor the Japanese or Singaporean AI Safety Institutes, with whom they are not). The UK AISI called for input from labs and research institutes and started work on research topics like AI evaluations, which we'll discuss below, with notable advisors and staff such as Yoshua Bengio, Ian Hogarth, Paul Christiano, and Matt Clifford.

What do the new guys say?

Since then, the dramatic-in-a-British-way 2024 election cut the Tories' seat count in parliament by two thirds and doubled Labour's. Rishi Sunak has been replaced as Prime Minister by Sir Keir Starmer.

The new Labour government has indicated that it intends to regulate AI more tightly than the previous Tory government did, while still encouraging growth of the AI sector and making use of AI in delivering its national missions.

Before the election, Starmer stated that the UK should introduce stronger regulation of AI, and Labour's manifesto promised "binding regulation on the handful of companies developing the most powerful AI models". Beyond plans to ban sexual deepfakes and outlaw nudification, we have little information on what this binding regulation would look like.

Indeed, some recent statements seem to ape the previous government's pro-innovation approach.

What have the UK AISI and governmental friends been up to?

AI Opportunities Unit

On 26 July, the Secretary of State for Science, Innovation and Technology, Peter Kyle, stated that AI has enormous potential and that the UK must use AI to support the government's five national missions, while still developing next steps for regulating frontier AI. To do so:

Note that Kyle previously advocated for legally compelling AI developers to share test results with the UK AISI (rather than relying on existing voluntary sharing), though this hasn't appeared in rhetoric or policy since.

King v Baron

In July, during the King's Speech, the government committed to legislating on powerful AI by placing the UK AISI "on a statutory footing", giving it a permanent remit to improve safety while focusing specifically on developers of the most advanced frontier AI, rather than on users or, as the EU AI Act does, on AI developers more broadly.

Despite widespread rumours of a fully formed, ready-to-go AI Bill, King Charles III didn't mention any such bill. House of Lords member and nine-time Paralympic gold medallist, the Right Honourable Baron Holmes of Richmond, plans to re-introduce his proposed AI bill to the House.

Reports from the UK AISI and AI Seoul Summit

The AISI released three reports in May this year:

If you’re interested in a more in-depth analysis of existing AI regulations in the EU, China, and the US, check out our 2024 State of the AI Regulatory Landscape report. 
