May 19, 2025

Navigating NIST’s AI Risk Management Framework 1.0

We discuss the NIST AI Risk Management Framework (AI RMF) 1.0, a voluntary guide for mitigating enterprise AI risk.

The National Institute of Standards and Technology (NIST) created the AI Risk Management Framework (AI RMF) to help organizations responsibly manage the risks tied to artificial intelligence. The framework isn’t mandatory, but it lays out a clear structure for reasoning about AI risk. At its core, it’s built around four related functions: Map, Measure, Manage, and Govern.

MAP

The first function, MAP, is about establishing context: surfacing and testing assumptions about how an AI system will be used and deployed. Before building or deploying any AI system, you need to understand what it’s for, who it affects, and what could go wrong. That includes intended use cases, as well as unintended ones that could emerge from misuse or model drift over time. This step is where foundational decisions are made about whether the AI system should be built or deployed at all. Sometimes, once the risks and uncertainties are fully mapped out, the answer is that an enterprise shouldn’t build it.

A major part of MAP is setting up a TEVV (Test, Evaluation, Verification, and Validation) suite, which spans the entire model lifecycle. During the design phase, it means testing your assumptions and understanding how well your data reflects real-world deployment conditions. When the system moves into development, TEVV becomes more focused on internal performance: validating model performance, refining inputs, and anticipating edge cases. During deployment, TEVV shifts to evaluating user interaction and regulatory compliance. TEVV continues through post-deployment monitoring, incident tracking, and system recalibration. It’s not enough to launch a model and ignore potential misuse. An organization needs to watch for changes and build processes for redress if things go wrong.
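
As a concrete illustration, a lifecycle-spanning TEVV suite can be organized as a set of checks tagged by lifecycle stage, so the same harness runs at design, development, deployment, and post-deployment time. The sketch below is a hypothetical Python illustration; the stage names and check functions are our own, not prescribed by the AI RMF:

```python
# Hypothetical sketch of a stage-tagged TEVV suite; the stages and
# checks are illustrative, not prescribed by the AI RMF.
from dataclasses import dataclass
from typing import Callable

@dataclass
class TEVVCheck:
    name: str
    stage: str  # "design" | "development" | "deployment" | "post-deployment"
    run: Callable[[], bool]

def data_reflects_deployment() -> bool:
    # Placeholder: compare training-data distributions against a
    # sample of expected production traffic.
    return True

def accuracy_above_threshold() -> bool:
    # Placeholder: score a held-out eval set against a target metric.
    return True

SUITE = [
    TEVVCheck("data-representativeness", "design", data_reflects_deployment),
    TEVVCheck("offline-accuracy", "development", accuracy_above_threshold),
]

def run_stage(stage: str) -> bool:
    """Run every check registered for a lifecycle stage; fail closed."""
    results = {c.name: c.run() for c in SUITE if c.stage == stage}
    for name, passed in results.items():
        print(f"[{stage}] {name}: {'PASS' if passed else 'FAIL'}")
    return bool(results) and all(results.values())
```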

During the building process, risks from third-party data and external software need to be considered. If your system relies on external models, data vendors, or open-source libraries, those components come with their own risk profiles, which need to be accounted for.
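
One lightweight control is to pin and verify third-party artifacts before they enter the pipeline. A minimal sketch, assuming you keep a manifest of expected SHA-256 digests (the file name and digest below are placeholders):

```python
# Sketch: verify a vendored model artifact against a pinned SHA-256
# digest before loading it. The manifest contents are placeholders.
import hashlib
from pathlib import Path

PINNED_DIGESTS = {
    "vendor_model.onnx": "<expected-sha256-hex-digest>",  # placeholder
}

def sha256_of(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_artifact(path: Path) -> None:
    expected = PINNED_DIGESTS.get(path.name)
    if expected is None:
        raise RuntimeError(f"{path.name} is not in the pinned manifest")
    if sha256_of(path) != expected:
        raise RuntimeError(f"digest mismatch for {path.name}; refusing to load")
```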

MEASURE

MEASURE builds directly on the MAP phase. Once AI model risks are identified, you need to figure out how to quantify them. That means creating evaluation metrics and testing strategies, as well as running tests that reflect the system’s real-world objectives. For example, a generative chatbot requires different evaluation techniques than an object detection vision model. In some cases, accuracy might be the key concern. In others, fairness, explainability, or robustness could matter more.
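
To ground this, the sketch below computes overall accuracy alongside per-group accuracy, one simple fairness-oriented metric. The data is invented for illustration:

```python
# Sketch: overall accuracy plus accuracy broken out by a sensitive
# attribute, a simple fairness-oriented metric. Data is invented.
from collections import defaultdict

def accuracy(preds, labels):
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

def per_group_accuracy(preds, labels, groups):
    buckets = defaultdict(lambda: ([], []))
    for p, y, g in zip(preds, labels, groups):
        buckets[g][0].append(p)
        buckets[g][1].append(y)
    return {g: accuracy(ps, ys) for g, (ps, ys) in buckets.items()}

preds  = [1, 0, 1, 1, 0, 1]
labels = [1, 0, 0, 1, 0, 0]
groups = ["A", "A", "A", "B", "B", "B"]
print("overall:", accuracy(preds, labels))  # 4/6 correct
print("by group:", per_group_accuracy(preds, labels, groups))
```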

The MEASURE phase should also include stress testing. Enterprises need to ask how the system holds up under unusual inputs, and whether it is vulnerable to adversarial attacks or manipulation. For instance, an object detection model may require evaluations of its labeling quality to ensure fairness. All of this information feeds forward into the next phase, MANAGE: better measurement reveals gaps in the original MAP phase and should drive continuous iteration on the model’s design and deployment.
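
As a minimal illustration of the stress testing mentioned above, the sketch below perturbs numeric inputs with small random noise and measures how often predictions stay stable. The `model.predict` interface is an assumption, not a specific library’s API:

```python
# Sketch: perturbation-based robustness check. Assumes a model object
# exposing predict(list_of_inputs) -> list_of_labels; illustrative only.
import random

def perturb(x, noise=0.05):
    """Add small uniform noise to a numeric feature vector."""
    return [v + random.uniform(-noise, noise) for v in x]

def stability_rate(model, inputs, trials=10):
    """Fraction of inputs whose prediction survives small perturbations."""
    stable = 0
    for x in inputs:
        baseline = model.predict([x])[0]
        if all(model.predict([perturb(x)])[0] == baseline for _ in range(trials)):
            stable += 1
    return stable / len(inputs)
```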

MANAGE

The MANAGE function takes insights from MEASURE and turns them into action items. In this phase you design and implement the controls and failsafes that reduce the likelihood of harmful outcomes. MANAGE includes basic mitigation strategies like rate-limiting inputs or putting guardrails on the generated output. For LLMs, it might involve prompt rewriting or filters that detect dangerous output before it’s displayed to users.
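
As a sketch of what such guardrails can look like, here is a minimal token-bucket rate limiter and a keyword-based output filter. Both are deliberately simplified; production systems typically rely on dedicated moderation models or services rather than a hand-written blocklist:

```python
# Sketch: two simple guardrails for an LLM endpoint. The blocklist and
# rate parameters are illustrative placeholders.
import time

class TokenBucket:
    """Allow roughly `rate` requests per second, with burst capacity."""
    def __init__(self, rate: float, capacity: int):
        self.rate, self.capacity = rate, capacity
        self.tokens, self.last = float(capacity), time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

BLOCKED_TERMS = {"credit card number", "social security number"}  # illustrative

def output_is_safe(text: str) -> bool:
    """Crude keyword filter applied before output reaches the user."""
    lowered = text.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)
```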

It also involves infrastructure decisions. If an AI system goes off the rails, an enterprise needs a fallback plan for catastrophic AI error: an option to take the system offline, or a protocol for issuing immediate updates. Such contingency plans also require deciding who monitors an incident, how it gets escalated, and how it gets addressed. The MANAGE phase is not purely technical; it also emphasizes the internal workflows an organization needs to handle AI incidents.
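
As a minimal sketch of what taking a system offline can look like in code, assume the serving path consults a shared flag on every request. The flag store below is an in-memory dict standing in for a real configuration service, and `model.generate` is an assumed interface:

```python
# Sketch: a kill switch checked on every request. FLAGS stands in for
# a shared configuration service; model.generate is assumed.
FLAGS = {"model_enabled": True}

CANNED_FALLBACK = "This feature is temporarily unavailable."

def serve(prompt: str, model) -> str:
    if not FLAGS["model_enabled"]:
        # Fail over to a safe canned response while the incident is handled.
        return CANNED_FALLBACK
    return model.generate(prompt)

def trip_kill_switch(reason: str) -> None:
    """Called by monitoring or an on-call engineer during an incident."""
    FLAGS["model_enabled"] = False
    print(f"Kill switch tripped: {reason}")
```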

GOVERN

Finally, there’s the GOVERN phase. Governance establishes the policies, roles, and organizational culture that make effective risk management possible. This includes setting clear expectations. For example, before any new AI system is launched, it should go through the MAP and MEASURE processes using standard procedures. Governance teams should verify that appropriate risk mitigation steps have been taken, as well as lay out standard procedures for handling AI incidents.
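
One way to enforce that expectation is a pre-launch gate that blocks deployment unless the required MAP and MEASURE artifacts exist. A hypothetical sketch; the artifact names are our own, not mandated by the framework:

```python
# Sketch: a pre-launch governance gate. The required artifacts and
# their file names are illustrative, not mandated by the AI RMF.
from pathlib import Path

REQUIRED_ARTIFACTS = [
    "map_risk_assessment.md",     # output of the MAP phase
    "measure_eval_report.md",     # output of the MEASURE phase
    "incident_response_plan.md",  # standard procedures for AI incidents
]

def launch_gate(artifact_dir: str) -> bool:
    missing = [a for a in REQUIRED_ARTIFACTS
               if not (Path(artifact_dir) / a).exists()]
    if missing:
        print("Launch blocked; missing artifacts:", ", ".join(missing))
        return False
    return True
```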

Governance also means defining ownership of AI risk, such as establishing who is responsible for a given model. It should also specify who has the authority to approve AI deployment, oversee monitoring, and pull the plug if needed. Legal, ethics, and compliance teams should be an important part of the governance board. Over time, governance ensures AI systems are subject to ongoing review. This includes periodic audits and updates based on regulatory changes, as well as post-deployment impact assessments. It also allows institutions to learn from their prior mistakes through comprehensive documentation.
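
Ownership and approval authority are easier to enforce when they are recorded somewhere machine-readable. A minimal sketch of a model registry entry; the fields and the approval rule are illustrative assumptions:

```python
# Sketch: a registry entry that makes ownership and approval authority
# explicit. Fields and the approval rule are illustrative. Python 3.10+.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ModelRecord:
    name: str
    owner: str                  # accountable for this model's risk
    approver: str               # authority to approve deployment
    deployed: date | None = None
    audit_log: list[str] = field(default_factory=list)

    def approve_deployment(self, who: str) -> None:
        if who != self.approver:
            raise PermissionError(f"{who} cannot approve {self.name}")
        self.deployed = date.today()
        self.audit_log.append(f"{date.today()}: deployment approved by {who}")
```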

These four functions from the NIST AI RMF provide a structured way to think about and act on AI risk. While the framework is not legally binding, similar requirements are likely to become mandatory for enterprise AI deployments. Building a robust risk management framework now can effectively prepare an enterprise for future AI regulation.
