UK Government publishes "Frontier AI: capabilities and risks" Discussion Paper
post by A.H. (AlfredHarwood) · 2023-10-26T13:55:16.841Z · LW · GW
This is a link post for https://www.gov.uk/government/publications/frontier-ai-capabilities-and-risks-discussion-paper
Ahead of next week's AI Safety Summit, the UK government has published a discussion paper on the capabilities and risks of frontier AI. The paper comes in three parts and can be found here; see here for the accompanying press release. It has been reviewed by a panel of experts including Yoshua Bengio and Paul Christiano.
I haven't read it all, but it looks like a pretty good primer on AI risk. The first part, 'Capabilities and risks from frontier AI: discussion paper', is the main overview, and it is followed by two annexes: Annex A, 'Future risks of frontier AI', and Annex B, 'Safety and security risks of generative artificial intelligence to 2025'. Predictably, there is a lot of discussion of the basics of AI risk and of non-catastrophic risks such as labour market disruption, disinformation, and bias, but catastrophic risk does get a mention, often with the caveat that the subject is 'controversial'.
Here are some quotes I found after a quick skim:
On AI companies racing to the bottom regarding safety:
Individual companies may not be sufficiently incentivised to address all the potential harms of their systems. In recent years there has been an intense competition between AI developers to build products quickly. Competition on AI has raised concern about potential “race to the bottom” scenarios, where actors compete to rapidly develop AI systems and under-invest in safety measures. In such scenarios, it could be challenging even for AI developers to commit unilaterally to stringent safety standards, lest their commitments put them at a competitive disadvantage. The risks from this “race” dynamic will be exacerbated if it is technologically feasible to maintain or even accelerate the recent rapid pace of AI progress.
On losing control of AI:
Humans may increasingly hand over control of important decisions to AI systems, due to economic and geopolitical incentives. Some experts are concerned that future advanced AI systems will seek to increase their own influence and reduce human control, with potentially catastrophic consequences - although this is contested.
...
The likelihood of these risks remains controversial, with many experts thinking the likelihood is very low and some arguing a focus on risk distracts from present harms. However, many experts are concerned that losing control of advanced general-purpose AI systems is a real possibility and that loss of control could be permanent and catastrophic.
On loss of control of AI as a catastrophic risk:
As discussed earlier in the report, while some experts believe that highly capable general-purpose AI agents might be developed soon, others are sceptical it will ever be possible. If this does materialise, such agents might exceed the capabilities of human experts in domains relevant to loss of control, for example political strategy, weapons design, or self-improvement. For loss of control to be a catastrophic risk, AI systems would need to be given or gain some control over systems with significant impacts, such as military or financial systems. This remains a hypothetical and hotly disputed risk.