Take SCIFs, it’s dangerous to go alone
post by latterframe, Jeffrey Ladish (jeff-ladish), schroederdewitt · 2024-05-01T08:02:38.067Z
Coauthored by Dmitrii Volkov, Christian Schroeder de Witt, Jeffrey Ladish (Palisade Research, University of Oxford).
We explore how frontier AI labs could adopt operational security (opsec) best practices from fields like nuclear energy and construction to mitigate safety risks stemming from compromise of the AI R&D process. In the near term, such risks include model weight leaks and backdoor insertion; in the longer term, loss of control.
We discuss the Mistral and LLaMA model leaks as motivating examples and propose two classic opsec mitigations: performing AI audits in secure reading rooms (SCIFs) and using locked-down computers for frontier AI research.
Mistral model leak
In January 2024, a high-quality 70B LLM leaked from Mistral. Reporting suggests the model leaked through an external evaluation or product-design process: Mistral had shared the full model with a few other companies, and one of their employees leaked it.
Then there’s LLaMA, which was supposed to be gradually released to researchers and partners but leaked on 4chan a week after the announcement[1], sparking a wave of open LLM innovation.
Potential industry response
Industry might respond to incidents like this[2] by providing external auditors, evaluation organizations, or business partners with API access only, maybe further locking it down with query / download / entropy limits to prevent distillation.
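For concreteness, here is a minimal sketch of what such locked-down access could look like: a gateway that tracks per-auditor query counts and downloaded tokens and refuses requests once a daily cap is hit. All names and limits below are hypothetical, chosen purely for illustration rather than describing any lab's actual setup.

```python
# Illustrative only: per-auditor query and download caps in front of a
# blackbox inference API. Limits and names are hypothetical.
import time
from dataclasses import dataclass, field


@dataclass
class AuditorQuota:
    max_queries_per_day: int = 1_000       # hypothetical query cap
    max_tokens_per_day: int = 200_000      # hypothetical cap on tokens returned
    queries: int = 0
    tokens: int = 0
    window_start: float = field(default_factory=time.time)

    def charge(self, completion_tokens: int) -> None:
        # Roll the 24-hour window when it expires.
        if time.time() - self.window_start > 86_400:
            self.queries, self.tokens, self.window_start = 0, 0, time.time()
        if self.queries + 1 > self.max_queries_per_day:
            raise PermissionError("query limit exceeded; request blocked")
        if self.tokens + completion_tokens > self.max_tokens_per_day:
            raise PermissionError("download limit exceeded; request blocked")
        self.queries += 1
        self.tokens += completion_tokens


quotas = {"auditor-A": AuditorQuota()}
quotas["auditor-A"].charge(completion_tokens=400)  # allowed; later calls may be blocked
```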
This mitigation is effective at preventing model leaks, but it is too restrictive: blackbox AI access is insufficient for quality audits. Blackbox methods tend to be ad hoc, heuristic, and shallow, which makes them unreliable at finding adversarial inputs and biases and limited in eliciting capabilities. Interpretability work is almost impossible without gradient access.
So we are at an impasse: we want to give auditors weight access so they can do quality audits, but this risks the model being leaked. Even if leaks cannot be prevented indefinitely, we would at least like to delay them as long as possible and practice defense in depth. While we are currently working on more advanced rate limiting based on entropy / differential-privacy budgets to allow secure remote model access, in this proposal we suggest something different: importing physical opsec measures from other high-stakes fields.
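The sketch below is a toy version of the entropy-budget idea only (our actual scheme is still work in progress and may differ substantially): each API response is charged against a fixed per-auditor information budget, estimated from the surprisal of the tokens the model returns. The budget size and names are made up.

```python
# Toy illustration of an entropy budget, not our actual scheme: charge each
# response against an information budget (in bits) estimated from the model's
# per-token log-probabilities. Numbers are hypothetical.
import math


class EntropyBudget:
    def __init__(self, budget_bits: float = 1e6):  # hypothetical total budget
        self.remaining_bits = budget_bits

    def charge_response(self, token_logprobs: list[float]) -> None:
        # Total surprisal of the returned tokens, converted from nats to bits,
        # is a rough proxy for how much model-specific information leaves.
        bits = sum(-lp for lp in token_logprobs) / math.log(2)
        if bits > self.remaining_bits:
            raise PermissionError("information budget exhausted; access paused")
        self.remaining_bits -= bits


budget = EntropyBudget()
budget.charge_response([-0.7, -2.3, -0.1])  # charges about 4.5 bits
```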
SCIFs / secure reading rooms
Aerospace, nuclear, intelligence and other high-stakes fields routinely employ special secure facilities for work with sensitive information. Entering the facility typically requires surrendering your phone and belongings; the facility is sound- and EM-proofed and regularly inspected for any devices left inside; it has armed guards. This design makes it hard to get any data out while allowing full access inside, which fits the audit use case very well.
The emerging field of cryptography for deep learning aims to address some of the same issues as SCIFs; however, scaling complex cryptography to state-of-the-art AI is an open research question. SCIFs, by contrast, are a simple and robust technology that buys a lot of security for a small investment.
Just how small? There are two main costs to SCIFs: maintenance and inconvenience. First, a SCIF must be built and maintained[3]. Second, it’s less convenient for an auditor to work from a SCIF than from the comfort of their home[4].
Our current belief is that SCIFs can easily be cost-effective if placed in AI hubs and universities[5]; we defer concrete cost analysis to future work.
Locked-down laptops
SCIFs are designed to limit unintended information flow: auditors are free to work as they wish inside, but can’t take information stores like paper or flash drives in or out.
A softer physical security measure is to allow research access only from hardened laptops. Such laptops might have no I/O ports and an extremely limited operating system concerned only with authenticating the user and setting up a tunnel to the company’s AI research harness.
Like a SCIF, these provide human-bandwidth data access (screen and keyboard) while limiting high-bandwidth channels: no USB, printer output, or HDMI connection, making exfiltration much slower and more error-prone.
If an audit laptop is just a thin terminal for the remote research harness, we could further limit leaks by employing user activity anomaly detection and remotely locking compromised terminals (this is known as Data Loss Prevention in security jargon).
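To make “anomaly detection plus remote locking” more concrete, here is a minimal sketch (thresholds and names are hypothetical; real DLP tooling is far more sophisticated) that locks a terminal session whose outbound data volume spikes well above its own recent baseline.

```python
# Minimal DLP-style sketch for thin audit terminals: lock a session whose
# per-minute outbound volume spikes far above its recent baseline.
# Thresholds and names are hypothetical.
from collections import defaultdict, deque


class SessionMonitor:
    def __init__(self, window_minutes: int = 30, spike_factor: float = 5.0):
        self.history = defaultdict(lambda: deque(maxlen=window_minutes))
        self.locked: set[str] = set()
        self.spike_factor = spike_factor

    def record_minute(self, session_id: str, megabytes_out: float) -> bool:
        """Record one minute of outbound traffic; return False if the session is locked."""
        if session_id in self.locked:
            return False
        history = self.history[session_id]
        baseline = sum(history) / len(history) if history else None
        history.append(megabytes_out)
        if baseline is not None and megabytes_out > self.spike_factor * max(baseline, 1.0):
            self.locked.add(session_id)  # in practice: revoke the tunnel and alert security
            return False
        return True


monitor = SessionMonitor()
monitor.record_minute("auditor-A", 2.0)   # normal traffic: session stays unlocked
monitor.record_minute("auditor-A", 50.0)  # spike: session gets locked
```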
Side benefits
While not the focus of this piece, we note that the use of SCIFs could help democratize access to safety-critical research on frontier models, particularly for academic researchers, since the secure environments would need to be provisioned with (or linked to) the compute hardware necessary to audit state-of-the-art industry models.
Conclusion
We propose importing SCIFs and locked-down computers as AI R&D security practices, motivated by examples of model weight leaks through human disclosure.
This article is written as an opinion piece: we aim to spark discussion and attract early feedback and/or collaborators, and we plan to provide a more rigorous analysis of costs, benefits, stakeholders, and outcomes in a later piece.
Further reading
Securing Artificial Intelligence Model Weights: Interim Report | RAND
Guidelines for secure AI system development - NCSC.GOV.UK
Preventing model exfiltration with upload limits — AI Alignment Forum
Footnotes
1. Some speculate the leak was intentional, a way of avoiding accountability. If true, this calls for stronger legal responsibilities, but it does not affect our opsec suggestions, which are designed to avoid accidental compromise.
2. These incidents are not specifically about leaks from audit access: the Mistral model was shared with business partners for application development. Here we view both kinds of access as essentially weight sharing, and thus similar for the purposes of this discussion.
3. It is likely the community could also coordinate to use existing SCIFs, e.g. university ones.
4. How much does that affect their productivity? By 20%? By a factor of 2? A factor of 10? We defer this question to future work.
5. Temporary SCIFs could also be deployed on demand, avoiding city-planning compliance requirements.
Comments
comment by davekasten · 2024-05-02T04:07:58.570Z
It may be worth noting that, at least anecdotally, when you talk about AI development processes with DoD policy people, they assume that SCIFs will be used at some point.
YMMV on whether that's their pattern-matching or hard-earned experience speaking, but I think worth noting.