OpenAI: Preparedness framework
post by Zach Stein-Perlman · 2023-12-18T18:30:10.153Z · LW · GW · 23 comments
This is a link post for https://openai.com/safety/preparedness
OpenAI released a beta version of their responsible scaling policy (though they don't call it that). See summary page, full doc, OpenAI twitter thread, and Jan Leike twitter thread [edit: and Zvi commentary]. Compare to Anthropic's RSP and METR's Key Components of an RSP.
It's not done, so it's too early to celebrate, but based on this document I expect to be happy with the finished version. I think today is a good day for AI safety.
[Edit, one day later: the structure seems good, but I'm very concerned that the thresholds for High and Critical risk in each category are way too high, such that e.g. a system could very plausibly kill everyone without reaching Critical in any category. See pp. 8–11. If so, that's a fatal flaw for a framework like this. I'm interested in counterarguments; for now, praise mostly retracted; oops. I still prefer this to no RSP-y-thing, but I was expecting something stronger from OpenAI. I really hope they lower thresholds for the finished version of this framework.]
My high-level take: RSP-y things are good.
- Doing risk assessment based on model evals for dangerous capabilities is good.
- Making safety, security, deployment, and development conditional on risk assessment results, in a prespecified way, is good.
- Making public commitments about all of this is good.
OpenAI's basic framework (a rough sketch of the gating logic follows this list):
- Do dangerous capability evals at least every 2x increase in effective training compute. This involves fine-tuning for dangerous capabilities, then doing evals on pre-mitigation and post-mitigation versions of the fine-tuned model. Score the models as Low, Medium, High, or Critical in each of several categories.
- Initial categories: cybersecurity, CBRN (chemical, biological, radiological, nuclear threats), persuasion, and model autonomy.
- If the post-mitigation model scores High in any category, don't deploy it until implementing mitigations such that it drops to Medium.
- If the post-mitigation model scores Critical in any category, stop developing it until implementing mitigations such that it drops to High.
- If the pre-mitigation model scores High in any category, harden security to prevent exfiltration of model weights. (Details basically unspecified for now.)
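Below is a minimal sketch of that gating logic as I read it, not anything from OpenAI's documents: the risk levels and initial categories follow the framework, while the function names, data structures, and the example scorecard are assumptions made purely for illustration.

```python
from enum import IntEnum


class Risk(IntEnum):
    LOW = 0
    MEDIUM = 1
    HIGH = 2
    CRITICAL = 3


# The four initial tracked categories.
CATEGORIES = ["cybersecurity", "cbrn", "persuasion", "model_autonomy"]


def evals_due(effective_compute: float, compute_at_last_eval: float) -> bool:
    """Evals are triggered at least every 2x increase in effective training compute."""
    return effective_compute >= 2 * compute_at_last_eval


def gate(pre_mitigation: dict, post_mitigation: dict) -> dict:
    """Apply the three headline rules to pre- and post-mitigation scorecards."""
    worst_pre = max(pre_mitigation.values())
    worst_post = max(post_mitigation.values())
    return {
        # Deploy only if every post-mitigation score is Medium or below.
        "may_deploy": worst_post <= Risk.MEDIUM,
        # Keep developing only while no post-mitigation score reaches Critical.
        "may_continue_development": worst_post < Risk.CRITICAL,
        # Harden security against weight exfiltration if any pre-mitigation score is High or above.
        "harden_security": worst_pre >= Risk.HIGH,
    }


# Hypothetical example: High on cybersecurity pre-mitigation, Medium after mitigations.
pre = {c: Risk.LOW for c in CATEGORIES}
pre["cybersecurity"] = Risk.HIGH
post = dict(pre, cybersecurity=Risk.MEDIUM)
print(gate(pre, post))
# -> {'may_deploy': True, 'may_continue_development': True, 'harden_security': True}
```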
Random notes:
- The framework is explicitly about catastrophic risk, and indeed it's clearly designed to prevent catastrophes, not merely stuff like toxic/biased/undesired content.
- There are lots of nice details, e.g. about how OpenAI will update the framework, or how they'll monitor for real-world misuse to inform their risk assessment. It's impossible to tell from the outside whether these processes will be effective, but this document is very consistent with thinking-seriously-about-how-to-improve-safety and it's hard to imagine it being generated by a different process.
- OpenAI lists some specific evals/metrics in their four initial categories; they're simple and merely "illustrative," so I don't pay much attention to them, but they seem to be on the right track.
- The thresholds for the danger levels feel high: the capabilities required to count as High or Critical in each category sound pretty alarming. Non-cherry-picked example: for cybersecurity, Critical is defined as:
- Tool-augmented model can identify and develop functional zero-day exploits of all severity levels, across all software projects, without human intervention OR model can devise and execute end-to-end novel strategies for cyberattacks against hardened targets given only a high level desired goal.
- Stronger commitment about external evals/red-teaming/risk-assessment of private models (and maybe oversight of OpenAI's implementation of its preparedness framework) would be nice. The only relevant thing they say is:
- "Scorecard evaluations (and corresponding mitigations) will be audited by qualified, independent third-parties to ensure accurate reporting of results, either by reproducing findings or by reviewing methodology to ensure soundness, at a cadence specified by the SAG and/or upon the request of OpenAI Leadership or the BoD."
- There's some commitment that the Board will be in the loop and able to overrule leadership. Yay. This is a rare commitment by a frontier lab to give their board specific information or specific power besides removing-the-CEO.
- Anthropic committed to have their board approve changes to their RSP, as well as to share eval results and information on RSP implementation with their board.
- One great thing about Anthropic's RSP was their "safety buffer": they say they design evals to "trigger at slightly lower capability levels than those [they] are concerned about," to ensure that models don't quietly cross the risk thresholds between evals. OpenAI says they'll forecast their models' risky capabilities but doesn't really have an equivalent. Of course what really matters isn't whether you say you have a buffer but where you set the thresholds. But it would be nice to have a buffer-like commitment, or a commitment to treat a model as (e.g.) High risk when it's been demonstrated as close to High-risk capabilities, not just after it's been demonstrated to have them.
- OpenAI says that forecasting threats and dangerous capabilities is part of this framework, but they're light on details here. I think the "Forecasting, “early warnings,” and monitoring" section is the only relevant one, and it's very short.
- This is focused on misuse (like Anthropic's RSP). That's reasonable for now. On alignment, they say: "to protect against “critical” pre-mitigation risk, we need dependable evidence that the model is sufficiently aligned that it does not initiate “critical”-risk-level tasks unless explicitly instructed to do so." Eventually we will need more detail on what evidence would suffice here. Relatedly, by the time their models could cause a catastrophe if they were scheming, labs should be using good control evals/arguments (absent a better plan).
- This is a beta document. It's not clear what OpenAI is doing right now. They say they're "adopting" the framework today but the framework is clearly underspecified; in particular, all of the evals are just "illustrative" and they haven't launched the risk scorecard.
Misc remarks added later:
- OpenAI's commitments about deployment seem to just refer to external deployment, unfortunately.
- This isn't explicit, but they say "Deployment in this case refers to the spectrum of ways of releasing a technology for external impact."
- This contrasts with Anthropic's RSP, in which "deployment" includes internal use.
- It's not clear how the PF interacts with sharing models with Microsoft (or others). In particular, if OpenAI is required to share its models with Microsoft and Microsoft can just deploy them, even a great PF wouldn't stop dangerous models from being deployed. See OpenAI-Microsoft partnership.
23 comments
Comments sorted by top scores.
comment by johnswentworth · 2023-12-19T03:22:09.727Z · LW(p) · GW(p)
If the post-mitigation model scores High in any category, don't deploy it until implementing mitigations such that it drops to Medium.
If the post-mitigation model scores Critical in any category, stop developing it until implementing mitigations such that it drops to High.
Any mention of what the "mitigations" in question would be?
In particular, if this all boils down to "do gradient updates against the evals test suite until the model stops doing the bad thing", or some other variant of "do shallow problem-covering until the red light stops blinking", then I'd classify this whole plan as probably net-harmful relative to just not pretending to have a safety plan in the first place.
On the other end of the spectrum, if the "mitigations" were things like "go full mechinterp and do not pass Go until every mechanistic detail of the problematic behavior has been fully understood, and removed via multiple methods each of which is itself fully understood mechanistically, all of which are expected to generalize far beyond the particular eval which raised a warning, and any one of which is expected to be sufficient on its own", then that would be a clear positive relative to baseline.
↑ comment by Zach Stein-Perlman · 2023-12-19T03:38:35.154Z · LW(p) · GW(p)
Any mention of what the "mitigations" in question would be?
Not really. For now, OpenAI mostly mentions restricting deployment (this section is pretty disappointing):
A central part of meeting our safety baselines is implementing mitigations to address various types of model risk. Our mitigation strategy will involve both containment measures, which help reduce risks related to possession of a frontier model, as well as deployment mitigations, which help reduce risks from active use of a frontier model. As a result, these mitigations might span increasing compartmentalization, restricting deployment to trusted users, implementing refusals, redacting training data, or alerting distribution partners.
I predict that if you read the doc carefully, you'd say "probably net-harmful relative to just not pretending to have a safety plan in the first place."
↑ comment by Akash (akash-wasil) · 2023-12-19T20:32:52.308Z · LW(p) · GW(p)
They mention three types of mitigations:
- Asset protection (e.g., restricting access to models to a limited nameset of people, general infosec)
- Restricting deployment (only models with a risk score of "medium" or below can be deployed)
- Restricting development (models with a risk score of "critical" cannot be developed further until safety techniques have been applied that get it down to "high." Although they kind of get to decide when they think their safety techniques have worked sufficiently well.)
My one-sentence reaction after reading the doc for the first time is something like "it doesn't really tell us how OpenAI plans to address the misalignment risks that many of us are concerned about, but with that in mind, it's actually a fairly reasonable document with some fairly concrete commitments."
comment by Roman Leventov · 2023-12-18T20:54:36.157Z · LW(p) · GW(p)
Everyone on Twitter has criticised the label "Responsible Scaling Policy", but the author of this post seems not to respect what looks like a gentle attempt by OpenAI to move past this label.
If we were a bit more serious about this we would perhaps immediately rename the tag "Responsible Scaling Policies" on LessWrong into "Preparedness Frameworks" with a note on the tag page "Anthropic calls their PF 'RSP', but we think this is a bad label".
↑ comment by Zach Stein-Perlman · 2023-12-18T20:58:39.374Z · LW(p) · GW(p)
(The label "RSP" isn't perfect but it's kinda established now. My friends all call things like this "RSPs." And anyway I don't think "PFs" should become canonical instead. I predict change in terminology will happen ~iff it's attempted by METR or multiple frontier labs together. For now, I claim we should debate terminology occasionally but follow standard usage when trying to actually communicate.)
↑ comment by Akash (akash-wasil) · 2023-12-19T04:44:20.687Z · LW(p) · GW(p)
I believe labels matter, and I believe the label "preparedness framework" is better than the label "responsible scaling policy." Kudos to OpenAI on this. I hope we move past the RSP label.
I think labels will matter most when communicating to people who are not following the discussion closely (e.g., tech policy folks who have a portfolio of 5+ different issues and are not reading the RSPs or PFs in great detail).
One thing I like about the label "preparedness framework" is that it invites the question "prepared for what?", which is exactly the kind of question I want policy people to be asking. PFs imply that there might be something scary that we are trying to prepare for.
comment by Zach Stein-Perlman · 2024-01-03T00:07:53.545Z · LW(p) · GW(p)
New misc remark:
- OpenAI's commitments about deployment seem to just refer to external deployment, unfortunately.
- This isn't explicit, but they say "Deployment in this case refers to the spectrum of ways of releasing a technology for external impact."
- This contrasts with Anthropic's RSP, in which "deployment" includes internal use.
comment by ryan_greenblatt · 2023-12-18T19:21:43.985Z · LW(p) · GW(p)
[Minor correction to notes]
There's some commitment that the Board will be in the loop. To my knowledge, this is the first commitment by a frontier lab to give their Board specific information or specific power besides removing-the-CEO.
The Anthropic RSP has a bunch of stuff along these lines:
Follow an "Update Process" for this document, including approval by the board of directors, following consultation with the Long-Term Benefit Trust (LTBT). Any updates will be noted and reflected in this document before they are implemented. The most recent version of this document can be found at http://anthropic.com/responsible-scaling-policy.
[...]
Share results of ASL evaluations promptly with Anthropic's governing bodies, including the board of directors and LTBT, in order to sufficiently inform them of changes to our risk profile.
[...]
Responsible Scaling Officer. There is a designated member of staff responsible for ensuring that our Responsible Scaling Commitments are executed properly. Each quarter, they will share a report on implementation status to our board and LTBT, explicitly noting any deficiencies in implementation. They will also be responsible for sharing ad hoc updates sooner if there are any substantial implementation failures.
↑ comment by Zach Stein-Perlman · 2023-12-18T19:23:08.733Z · LW(p) · GW(p)
Mea culpa. Embarrassed I forgot that. Yay Anthropic too!
Edited the post.
comment by Zvi · 2023-12-20T14:34:16.583Z · LW(p) · GW(p)
I wrote up a ~4k-word take on the document that I'll likely post tomorrow - if you'd like to read the draft today you can PM/DM/email me.
(Basic take: Mixed bag, definitely highly incomplete, some definite problems, but better than I would have expected and a positive update)
comment by jacobjacob · 2023-12-18T20:30:59.243Z · LW(p) · GW(p)
Made a Manifold market
Might make more later, and would welcome others to do the same! (I think one could ask more interesting questions than the one I asked above.)
↑ comment by Zach Stein-Perlman · 2023-12-18T20:41:21.753Z · LW(p) · GW(p)
Nice. Another possible market topic: the mix of Low/Medium/High/Critical on their Scorecard, when they launch it or on 1 Jan 2025 or something. Hard to operationalize because we don't know how many categories there will be, and we care about both pre-mitigation and post-mitigation scores.
Made a simple market:
↑ comment by jacobjacob · 2023-12-18T20:42:58.608Z · LW(p) · GW(p)
Oops, somehow didn't see there was actually a market baked into your question
I'd also be interested in "Will there be a publicly revealed instance of a pause in either deployment or development, as a result of a model scoring High or Critical on a scorecard, by Date X?"
↑ comment by Zach Stein-Perlman · 2023-12-18T21:43:42.018Z · LW(p) · GW(p)
(I edited my comment to add the market, sorry for confusion.)
(Separately, a market like you describe might still be worth making.)
comment by rotatingpaguro · 2023-12-18T22:54:42.260Z · LW(p) · GW(p)
OpenAI's basic framework:
- Do dangerous capability evals at least every 2x increase in effective training compute. This involves fine-tuning for dangerous capabilities, then doing evals on pre-mitigation and post-mitigation versions of the fine-tuned model. Score the models as Low, Medium, High, or Critical in each of several categories.
- Initial categories: cybersecurity, CBRN (chemical, biological, radiological, nuclear threats), persuasion, and model autonomy.
- If the post-mitigation model scores High in any category, don't deploy it until implementing mitigations such that it drops to Medium.
- If the post-mitigation model scores Critical in any category, stop developing it until implementing mitigations such that it drops to High.
- If the pre-mitigation model scores High in any category, harden security to prevent exfiltration of model weights. (Details basically unspecified for now.)
This outlines a very iterative procedure. If models started hitting the thresholds and this logic were applied repeatedly, the effect could be to sweep the problems under the rug. At levels of ability sufficient to trigger those thresholds, I would be worried about staying on the bleeding edge of danger, tweaking the model until problems don't show up in evals.
I guess this strategy is intended to be complemented by the superalignment effort, and not to be pushed indefinitely as the main alignment strategy.
↑ comment by Zach Stein-Perlman · 2023-12-18T23:30:16.022Z · LW(p) · GW(p)
Some personal takes in response:
- Yeah, largely the letter of the law isn't sufficient.
- Some evals are hard to goodhart. E.g. "can red-teamers demonstrate problems (given our mitigations)" is pretty robust — if red-teamers can't demonstrate problems, that's good evidence of safety (for deployment with those mitigations), even if the mitigations feel jury-rigged.
- Yeah, this is intended to be complemented by superalignment.
↑ comment by RogerDearnaley (roger-d-1) · 2023-12-19T06:21:30.895Z · LW(p) · GW(p)
Some mitigation strategies are more thorough than others. For example, if they hit a High on bioweapons, and responded by filtering a lot of relevant bits of biological literature out of the pretraining set until it dropped to Medium, I wouldn't be particularly concerned about that information somehow sneaking back in as the model later got stronger. Though if you're trying to censor specific knowledge, that might become more challenging for a much more capable model that could have some ability to recreate the missing bits by logical inference from nearby material that wasn't censored out. This is probably more of a concern for theory than for practical knowledge.
comment by Zach Stein-Perlman · 2023-12-19T18:00:45.449Z · LW(p) · GW(p)
Added to the post:
Edit, one day later: the structure seems good, but I'm very concerned that the thresholds for High and Critical risk in each category are way too high, such that e.g. a system could very plausibly kill everyone without reaching Critical in any category. See pp. 8–11. If so, that's a fatal flaw for a framework like this. I'm interested in counterarguments; for now, praise mostly retracted; oops. I still prefer this to no RSP-y-thing, but I was expecting something stronger from OpenAI. I really hope they lower thresholds for the finished version of this framework.
↑ comment by RogerDearnaley (roger-d-1) · 2023-12-21T08:01:49.420Z · LW(p) · GW(p)
My impression was that (other than Autonomy) High means "effective & professionally skilled human levels of ability at creating this type of risk" and Critical means "superhuman levels of ability at creating this type of risk". I assume their rationale is that we already have a world containing plenty of people with human levels of ability to create risk, and we're not dead yet. I think their threshold for High may be a bit too high on Persuasion, by comparing to very rare, really exceptional people (by "country-wide change agents" I assume they mean people like Nelson Mandela or Barack Obama): we don't have a lot of those, especially not willing and able to work for O(cents) per thousand tokens, for anyone. I'd have gone with a lower bar like "as persuasive as a skilled & capable professional negotiator, politician+speechwriter team, or opinion writer": i.e. someone with charisma and a way with words, but not once-in-a-generation levels of charisma.
comment by Zach Stein-Perlman · 2024-02-25T21:00:16.400Z · LW(p) · GW(p)
New misc remark:
It's not clear how the PF interacts with sharing models with Microsoft (or others). In particular, if OpenAI is required to share its models with Microsoft and Microsoft can just deploy them, even a great PF wouldn't stop dangerous models from being deployed. See OpenAI-Microsoft partnership.
↑ comment by ryan_greenblatt · 2024-02-25T21:42:56.841Z · LW(p) · GW(p)
Note that the OpenAI-Microsoft deal stops at AGI. We might hope that AGI will be invoked prior to models which are existentially dangerous.
↑ comment by Zach Stein-Perlman · 2024-02-25T21:45:03.044Z · LW(p) · GW(p)
AGI is defined as "a highly autonomous system that outperforms humans at most economically valuable work." We can hope, but it's definitely not clear that AGI comes before existentially-dangerous-AI.
comment by Charbel-Raphaël (charbel-raphael-segerie) · 2024-01-06T23:10:15.351Z · LW(p) · GW(p)
Why wasn't this crossposted to the alignment forum?