Actionable-guidance and roadmap recommendations for the NIST AI Risk Management Framework

post by Dan H (dan-hendrycks), Tony Barrett · 2022-05-17T15:26:23.201Z · LW · GW



Updated 13 September 2022 with a link to our arXiv paper and corrections to out-of-date items

This is a linkpost to our working paper “Towards AI Standards Addressing AI Catastrophic Risks: Actionable-Guidance and Roadmap Recommendations for the NIST AI Risk Management Framework”, which we co-authored with our UC Berkeley colleagues Jessica Newman and Brandie Nonnecke. Here is a link:

We seek feedback from readers considering catastrophic risks as part of their work on AI safety and governance. Please email feedback to Tony Barrett at

If you are providing feedback on the draft guidance in this document, then in addition to any comments via email, it would be particularly helpful if you answered the questions in Appendix 2 of this document or in the following Google Form: 

We may update the links or content in this post to reflect the latest version of the document. 

Background on the NIST AI RMF

The National Institute of Standards and Technology (NIST) is currently developing the NIST Artificial Intelligence Risk Management Framework, or AI RMF. NIST intends the AI RMF as voluntary guidance on AI risk assessment and other AI risk management processes for AI developers, users, deployers, and evaluators. NIST plans to release Version 1.0 of the AI RMF in early 2023.
Because the AI RMF is voluntary guidance, NIST would not impose “hard law” mandatory requirements for AI developers or deployers to use it. However, AI RMF guidance would be part of “soft law” norms and best practices, which AI developers and deployers would have incentives to follow as appropriate. For example, insurers or courts may expect AI developers and deployers to show reasonable use of relevant NIST AI RMF guidance as part of due care when developing or deploying AI systems in high-stakes contexts, in much the same way that NIST Cybersecurity Framework guidance can be used to demonstrate due care for cybersecurity. In addition, elements of soft-law guidance are sometimes adapted into hard-law regulations, e.g., by mandating that particular industry sectors comply with specific standards.

Summary of our Working Paper

In this document, we provide draft elements of actionable guidance, focused primarily on identifying and managing risks of events with very high or catastrophic consequences, that are intended to be easily incorporated by NIST into the AI RMF. We also describe the methodology we used to develop our recommendations. 

We provide actionable-guidance recommendations for AI RMF 1.0 on:

We also provide recommendations on additional issues for NIST to address as part of the roadmap for later versions of the AI RMF or for supplementary publications, on the grounds that these are critical topics for which appropriate guidance would take additional time to develop. Our recommendations for the AI RMF roadmap include:

Key Sections of our Working Paper

Readers considering catastrophic risks as part of their work on AI safety and governance may be most interested in the following sections:

Next Steps

As mentioned above, feedback to Tony Barrett would be helpful. We will consider feedback as we work on revised versions. These will inform our recommendations to NIST on how best to address catastrophic risks and related issues in the NIST AI RMF, as well as our follow-on work for standards-development and AI governance forums.
