Urging an International AI Treaty: An Open Letter
post by Olli Järviniemi (jarviniemi) · 2023-10-31T11:26:25.864Z
This is a link post for https://aitreaty.org/
We call on governments worldwide to actively respond to the potentially catastrophic risks that advanced artificial intelligence (AI) systems pose to humanity, encompassing threats from misuse, systemic risks, and loss of control. We advocate for the development and ratification of an international AI treaty to reduce these risks and ensure the benefits of AI for all.
[...]
We believe the central aim of an international AI treaty should be to prevent the unchecked escalation of the capabilities of AI systems while preserving their benefits. For such a treaty, we suggest the following core components:
- Global Compute Thresholds: Internationally upheld thresholds on the amount of compute used to train any given AI model, with a procedure to lower these over time to account for algorithmic improvements.
- CERN for AI Safety: A collaborative AI safety laboratory akin to CERN for pooling resources, expertise, and knowledge in the service of AI safety, and acting as a cooperative platform for safe AI development and safety research.
- Safe APIs: Access to the APIs of safe AI models, with capabilities held within estimated safe limits, to reduce incentives for a dangerous race in AI development.
- Compliance Commission: An international commission responsible for monitoring treaty compliance.
Full letter at https://aitreaty.org/.
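As an illustrative aside on the "Global Compute Thresholds" item above (this sketch is not part of the letter): a common rule of thumb estimates training compute as roughly 6 FLOPs per model parameter per training token. The sketch below checks a planned training run against a cap under that approximation; the threshold value and function names are hypothetical placeholders, since the letter does not specify numbers.

```python
# Illustrative sketch only. Uses the common ~6 * parameters * tokens
# approximation for training compute; the threshold value below is a
# made-up placeholder, not a figure from the letter.

TREATY_COMPUTE_THRESHOLD_FLOP = 1e26  # hypothetical cap, for illustration


def estimated_training_flop(num_parameters: float, num_training_tokens: float) -> float:
    """Rough training-compute estimate: ~6 FLOPs per parameter per token."""
    return 6.0 * num_parameters * num_training_tokens


def exceeds_threshold(num_parameters: float, num_training_tokens: float,
                      threshold_flop: float = TREATY_COMPUTE_THRESHOLD_FLOP) -> bool:
    """True if a planned training run would exceed the (hypothetical) cap."""
    return estimated_training_flop(num_parameters, num_training_tokens) > threshold_flop


# Example: a 70B-parameter model trained on 15T tokens
# -> about 6.3e24 FLOP, below a 1e26 cap
print(exceeds_threshold(70e9, 15e12))  # False
```

Under such a scheme, the letter's "procedure to lower these over time" would correspond to periodically reducing the threshold constant as algorithmic efficiency improves.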
2 comments
comment by Odd anon · 2023-11-01T06:13:45.385Z
I assume that "threshold" here means a cap/maximum, right? So that nobody can create AIs larger than that cap?
Or is there another possible meaning here?
Reply by Olli Järviniemi (jarviniemi) · 2023-11-01T12:48:27.489Z
That is my interpretation, yes.