How are voluntary commitments on vulnerability reporting going?
post by Adam Jones (domdomegg) · 2024-02-22T08:43:56.996Z · LW · GW · 1 commentsThis is a link post for https://adamjones.me/blog/ai-vulnerability-reporting/
Contents
Summary table Some high-level takeaways None 1 comment
This article is written solely in my personal capacity, and does not represent the views of any organisations I am affiliated with.
The UK and US governments have both secured voluntary commitments from many major AI companies on AI safety.[1]
These include having appropriate reporting mechanisms for both cybersecurity vulnerabilities and model vulnerabilities[2].
I took a look at how well organisations are living up to these commitments as of February 2024. This included reviewing what the processes actually are, and submitting test reports to see if they work.
Summary table
Company | Score |
---|---|
Adobe 🇺🇸 | 6/20 |
Amazon 🇺🇸 🇬🇧 | 6/20 |
Anthropic 🇺🇸 🇬🇧 | 13/20 |
Cohere 🇺🇸 | 12/20 |
Google 🇺🇸 | 20/20 |
Google DeepMind 🇬🇧 | 18/20 |
IBM 🇺🇸 | 5/20 |
Inflection 🇺🇸 🇬🇧 | 16/20 |
Meta 🇺🇸 🇬🇧 | 14/20 |
Microsoft 🇺🇸 🇬🇧 | 18/20 |
NVIDIA 🇺🇸 | 20/20 |
OpenAI 🇺🇸 🇬🇧 | 12/20 |
Palantir 🇺🇸 | 5/20 |
Salesforce 🇺🇸 | 9/20 |
Scale AI 🇺🇸 | 4/20 |
Stability AI 🇺🇸 | 1/20 |
Some high-level takeaways
Performance was quite low across the board. Simply listing a contact email and responding to queries would score 17 points, which would place a company in the top five.
However, a couple companies have great processes that can act as best practice examples. Both Google and NVIDIA got perfect scores. In addition, Google offers bug bounty incentives for model vulnerabilities and NVIDIA had an exceptionally clear and easy to use model vulnerability contact point.
Companies did much better on cybersecurity than model vulnerabilities. Additionally, companies that combined their cybersecurity and model vulnerability procedures scored better. This might be because existing cybersecurity processes are more battle tested, or taken more seriously than model vulnerabilities.
Companies do know how to have transparent contact processes. Every single company's press contact could be found within minutes, and was a simple email address. This suggests companies are able to sort this out when there are greater commercial incentives to do so.
For more details, see the full post.
- ^
In the US, all companies agreed to one set of commitments which included:
US on model vulnerabilities: Companies making this commitment recognize that AI systems may continue to have weaknesses and vulnerabilities even after robust red-teaming. They commit to establishing for systems within scope bounty systems, contests, or prizes to incent the responsible disclosure of weaknesses, such as unsafe behaviors, or to include AI systems in their existing bug bounty programs.
In the UK each company submitted their own commitment wordings. The government described the relevant areas as follows:
UK on cybersecurity: Maintain open lines of communication for feedback regarding product security, both internally and externally to your organisation, including mechanisms for security researchers to report vulnerabilities and receive legal safe harbour for doing so, and for escalating issues to the wider community. Helping to share knowledge and threat information will strengthen the overall community's ability to respond to AI security threats.
UK on model vulnerabilities: Establish clear, user-friendly, and publicly described processes for receiving model vulnerability reports drawing on established software vulnerability reporting processes. These processes can be built into – or take inspiration from – processes that organisations have built to receive reports of traditional software vulnerabilities. It is crucial that these policies are made publicly accessible and function effectively.
- ^
A model vulnerability is a safety or security issue relating to an AI model that isn't directly related to its cybersecurity. This could include vulnerability to jailbreaks, prompt injection attacks, privacy attacks, unaddressed potential for misuse, controllability issues, data poisoning attacks, bias and discrimination and general performance issues.
This is based on the definition in the UK government's paper.
1 comments
Comments sorted by top scores.
comment by Zach Stein-Perlman · 2024-02-22T09:13:15.408Z · LW(p) · GW(p)
Good work.
One plausibly-important factor I wish you'd tracked: whether the company offers bug bounties.