AI Risk Management Framework | NIST
post by DragonGod · 2023-01-26T15:27:19.807Z · LW · GW · 4 comments
This is a link post for https://www.nist.gov/itl/ai-risk-management-framework
On January 26, 2023, NIST released the AI Risk Management Framework (AI RMF 1.0), along with a companion NIST AI RMF Playbook, an AI RMF Explainer Video, an AI RMF Roadmap, an AI RMF Crosswalk, and various Perspectives. Watch the event here.
In collaboration with the private and public sectors, NIST has developed a framework to better manage risks to individuals, organizations, and society associated with artificial intelligence (AI). The NIST AI Risk Management Framework (AI RMF) is intended for voluntary use and to improve the ability to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems.
Released on January 26, 2023, the Framework was developed through a consensus-driven, open, transparent, and collaborative process that included a Request for Information, several draft versions for public comments, multiple workshops, and other opportunities to provide input. It is intended to build on, align with, and support AI risk management efforts by others.
A companion NIST AI RMF Playbook also has been published by NIST along with an AI RMF Roadmap, AI RMF Crosswalk, and various Perspectives. In addition, NIST is making available a video explainer about the AI RMF.
FLI also released a statement on NIST's framework:
FUTURE OF LIFE INSTITUTE
Statement
The Future of Life Institute applauds NIST for spearheading a multiyear, multistakeholder initiative to improve the management of risks from AI in the form of the Artificial Intelligence Risk Management Framework (AI RMF). As an active participant in its development process, we view the AI RMF as a crucial step in fostering a culture of risk identification and mitigation in the US and abroad.
With this launch, NIST has created a global public good. The AI RMF decreases barriers to examining the implications of AI on individuals, communities, and the planet by organizations charged with designing, developing, deploying, or using this technology. Moreover, we believe that this effort represents a critical opportunity for institutional leadership to establish clear boundaries around acceptable outcomes for AI usage. Many firms have already set limitations on the development of weapons and on activities that lead to clear physical or psychological harm, among others.

The release of version 1.0 of the AI RMF is not the conclusion of this effort. We praise NIST's commitment to update the document continuously as our common understanding of AI's impact on society evolves. In addition, we appreciate that stakeholders will be given concrete guidance for implementing these ideas via the agency's efforts in the form of a "playbook." External to NIST, our colleagues at the University of California, Berkeley are complementing the AI RMF with a profile dedicated to increasingly multi- or general-purpose AI systems.
Lastly, we recognize that for the AI RMF to be effective, it must be applied by stakeholders. In a perfect world, organizations would devote resources to identifying and mitigating the risks from AI intrinsically. In reality, incentives are needed to push this process forward. We/you/society can help to create these incentives in the following ways:
- Making compliance with the AI RMF a submission requirement at prestigious AI conferences;
- Having insurance companies provide coverage benefits to entities that evaluate AI risks through the AI RMF or another similar instrument;
- Convincing local, state, or the federal government to prioritize AI procurement based on demonstrable compliance with the AI RMF; and,
- Generating positive consumer sentiment for organizations that publicly express that they are devoting resources to the AI RMF process.
4 comments
comment by chanamessinger (cmessinger) · 2023-01-27T01:52:24.023Z · LW(p) · GW(p)
Does it say anything about AI risk that is about the real risks? (I have not clicked the links; the text above did not indicate to me one way or another.)
↑ comment by MaxRa · 2023-01-30T18:25:31.570Z · LW(p) · GW(p)
The report mentioned "harm to the global financial system [and to global supply chains]" somewhere as examples, which I found noteworthy for being very large-scale harms, and therefore plausibly requiring the kinds of AI systems that the AI x-risk community is most worried about.
↑ comment by Evan R. Murphy · 2023-01-27T22:43:34.484Z · LW(p) · GW(p)
I'm not sure if the core NIST standards go into catastrophic misalignment risk, but Barrett et al.'s supplemental guidance on the NIST standards does. I was a reviewer on that work, and I think they have more coming (see link in my first comment on this post for their first part).
comment by Evan R. Murphy · 2023-01-26T17:54:02.692Z · LW(p) · GW(p)
Been in the works for a while. Good to know it's officially out, thanks.
Related: https://www.alignmentforum.org/posts/JNqXyEuKM4wbFZzpL/actionable-guidance-and-roadmap-recommendations-for-the-nist-1 [AF · GW]