ARIA's Safeguarded AI grant program is accepting applications for Technical Area 1.1 until May 28th
post by Brendon_Wong · 2024-05-22T06:54:55.206Z · LW · GW
This is a link post for https://www.aria.org.uk/wp-content/uploads/2024/04/ARIA-Safeguarded-AI-TA1.1-Theory-Call-for-proposals.pdf
Note: I am completely unaffiliated with ARIA [LW · GW]. I figured I'd share this since applications close soon and I hadn't seen anyone post about it.
My Takeaways:
- ARIA is funding the development of Safeguarded AI [LW · GW], an update to and specific implementation of davidad's Open Agency Architecture [LW · GW].
- This grant round is for Technical Area (TA) 1.1, the first phase of developing Safeguarded AI.
- TA1.1 involves developing mathematical and computer science "syntax" for "language(s) that can be used by teams of humans (and, later, AI systems) to define 'world models', probabilistic specifications, neural network controllers, and proof certificates." The ultimate purpose is "defin[ing] safety" to AI systems to ensure that they "are not taking actions that would lead to catastrophic outcomes."
- Grant details:
- This round is expected to distribute up to £3.5M across 10–16 applicants. The round opened on April 11th, 2024 and will close on May 28th, 2024 (at "12:00 BST").
- Funding is primarily intended for UK entities (e.g. a person, company, nonprofit, or university), but non-UK entities can qualify if they can boost the impact of programs in the UK.
- This grant round is focused on supporting open source conceptual/research work, but will also fund people to communicate concepts in TA1.1 to broader audiences.
- The proposal mentions funding learning materials like blog posts, tutorials, and educational videos. This might be an interesting opportunity for people interested in Safeguarded AI to receive funding to learn about and share davidad's ideas.
- The next grant round, for TA1.2, is expected to open later this year.