## Overview
The NIST AI Risk Management Framework (AI RMF 1.0), published by the National Institute of Standards and Technology in January 2023, provides a voluntary framework for managing risks throughout the AI lifecycle. It is designed to help organizations design, develop, deploy, and use AI systems in ways that are trustworthy and responsible.
NIST AI RMF is becoming the de facto standard for AI governance in the United States, referenced by federal procurement policies and increasingly adopted by private sector organizations.
## Core functions
The framework is organized around four core functions:
### Govern
Establish and maintain policies, processes, and organizational structures for AI risk management.
| Activity | Description |
| --- | --- |
| Policies & procedures | Define AI governance policies, risk tolerances, and escalation procedures |
| Roles & responsibilities | Assign accountable individuals for AI risk management across the organization |
| Culture | Foster an organizational culture that prioritizes trustworthy AI |
| Third-party management | Establish oversight for AI components from external providers |
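The Govern activities above can be expressed as reviewable configuration rather than prose. A minimal Python sketch follows; all names, roles, and risk-tolerance levels are illustrative assumptions, not part of the framework:

```python
from dataclasses import dataclass, field

# Hypothetical sketch: Govern-function outputs (risk tolerance, an
# accountable owner, escalation contacts) encoded per AI system.
@dataclass
class GovernancePolicy:
    system_name: str
    risk_tolerance: str                 # one of "low", "medium", "high"
    accountable_owner: str              # a named individual, not a team alias
    escalation_contacts: list = field(default_factory=list)
    third_party_components: list = field(default_factory=list)

    def requires_escalation(self, observed_risk: str) -> bool:
        """Escalate when observed risk exceeds the declared tolerance."""
        order = ["low", "medium", "high"]
        return order.index(observed_risk) > order.index(self.risk_tolerance)

policy = GovernancePolicy(
    system_name="support-chatbot",
    risk_tolerance="low",
    accountable_owner="jane.doe",
    escalation_contacts=["ai-risk-committee"],
)
print(policy.requires_escalation("medium"))  # True: "medium" exceeds "low" tolerance
```

Keeping policies as structured data lets escalation rules be checked automatically instead of relying on documentation alone.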
### Map
Identify and contextualize risks associated with AI systems.
| Activity | Description |
| --- | --- |
| Context establishment | Document the intended purpose, scope, and operational environment of the AI system |
| Stakeholder identification | Identify all parties affected by the AI system’s outputs and decisions |
| Risk identification | Catalog potential risks including bias, safety, security, and privacy concerns |
| Benefits & costs | Assess the tradeoffs between AI deployment benefits and potential harms |
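One common way to capture the Map activities above is a risk register entry that records context, stakeholders, and identified risks in one place. The sketch below is hypothetical; the system, stakeholders, and risks are invented examples:

```python
# Hypothetical sketch: a Map-function risk register entry combining
# context, stakeholders, identified risks, and a benefit/harm note.
risk_register_entry = {
    "system": "resume-screening-model",
    "intended_purpose": "rank candidates for recruiter review",
    "operational_environment": "internal HR tooling, human-in-the-loop",
    "stakeholders": ["applicants", "recruiters", "hiring managers"],
    "identified_risks": [
        {"category": "bias", "description": "disparate ranking across demographic groups"},
        {"category": "privacy", "description": "retention of applicant PII in logs"},
    ],
    "benefits_vs_harms": "faster screening vs. risk of unfair exclusion",
}

# Indexing risks by category makes the later Measure and Manage
# steps queryable (e.g. "show every privacy risk we have mapped").
by_category = {}
for risk in risk_register_entry["identified_risks"]:
    by_category.setdefault(risk["category"], []).append(risk["description"])

print(sorted(by_category))  # ['bias', 'privacy']
```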
### Measure
Analyze and assess identified AI risks using quantitative and qualitative methods.
| Activity | Description |
| --- | --- |
| Risk assessment | Evaluate likelihood and impact of identified risks |
| Testing & evaluation | Conduct systematic testing including red-teaming and adversarial evaluation |
| Metrics & monitoring | Define key risk indicators and establish ongoing monitoring |
| Bias measurement | Quantify bias across demographics, use cases, and deployment contexts |
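As one concrete instance of the bias-measurement activity above, the gap in positive-outcome rates between two groups (the demographic parity difference) is a common quantitative bias metric. A minimal sketch with illustrative data; the framework itself does not prescribe this metric:

```python
# Hypothetical sketch: demographic parity difference, i.e. the
# absolute gap in positive-outcome rates between two groups.
def positive_rate(outcomes):
    """Fraction of positive (1) outcomes in a list of 0/1 labels."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_difference(group_a, group_b):
    """Absolute gap in positive-outcome rates between two groups."""
    return abs(positive_rate(group_a) - positive_rate(group_b))

# Illustrative 0/1 decision outcomes for two demographic groups.
group_a = [1, 1, 0, 1, 1, 1, 1, 0]  # 0.75 positive rate
group_b = [1, 0, 0, 1, 0, 0, 1, 0]  # 0.375 positive rate

gap = demographic_parity_difference(group_a, group_b)
print(gap)  # 0.375
```

A gap of 0 means both groups receive positive outcomes at the same rate; larger gaps flag candidates for deeper bias analysis.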
### Manage
Prioritize, respond to, and monitor AI risks on an ongoing basis.
| Activity | Description |
| --- | --- |
| Risk prioritization | Rank risks by severity and allocate mitigation resources |
| Risk response | Implement controls, mitigations, and safeguards |
| Continuous monitoring | Track risk levels over time with dashboards and alerts |
| Incident response | Establish procedures for responding to AI safety incidents |
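The risk-prioritization activity above is often implemented as a simple likelihood-times-impact score. A minimal sketch; the risks and scores are illustrative assumptions:

```python
# Hypothetical sketch: rank risks by severity (likelihood x impact)
# so mitigation effort goes to the highest-scoring risks first.
risks = [
    {"name": "prompt injection",        "likelihood": 0.6, "impact": 5},
    {"name": "PII leakage",             "likelihood": 0.3, "impact": 5},
    {"name": "hallucinated citations",  "likelihood": 0.8, "impact": 2},
]

for r in risks:
    r["severity"] = r["likelihood"] * r["impact"]

prioritized = sorted(risks, key=lambda r: r["severity"], reverse=True)
print([r["name"] for r in prioritized])
# ['prompt injection', 'hallucinated citations', 'PII leakage']
```

Note how a high-impact but lower-likelihood risk (PII leakage) can rank below a frequent low-impact one; a real program would revisit these scores as monitoring data arrives.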
## Trustworthy AI characteristics
NIST AI RMF defines seven characteristics of trustworthy AI:
- **Valid & reliable** — The AI system performs as intended with consistent results
- **Safe** — The system does not endanger life, health, property, or the environment
- **Secure & resilient** — Protected against adversarial attacks and able to degrade gracefully under failure
- **Accountable & transparent** — Decisions are explainable and responsibility is clear
- **Explainable & interpretable** — Outputs can be understood by stakeholders
- **Privacy-enhanced** — Personal data is protected throughout the AI lifecycle
- **Fair with harmful bias managed** — Bias is identified, measured, and mitigated
## How Know Your AI maps to NIST AI RMF
| NIST Function | Know Your AI Feature |
| --- | --- |
| Govern | Workspace roles, access controls, compliance dashboards |
| Map | Product configuration, dataset selection, risk categorization |
| Measure | Evaluation with LLM-as-Judge, security scoring, bias datasets |
| Manage | Monitoring dashboards, firewall, continuous evaluation scheduling |
| Trustworthy Characteristic | Know Your AI Coverage |
| --- | --- |
| Valid & reliable | Evaluation scoring, quality metrics |
| Safe | Safety datasets, harmful content detection |
| Secure & resilient | Red-team testing, jailbreak & prompt injection datasets |
| Accountable & transparent | Evaluation evidence, tracing, audit logs |
| Explainable & interpretable | Per-prompt judge analysis with reasoning |
| Privacy-enhanced | CCPA/CPRA compliance analysis, PII leakage detection |
| Fair with harmful bias managed | Bias detection datasets |
## Resources
- **Compliance**: How Know Your AI automates regulatory compliance.
- **Evaluation**: Run NIST-aligned evaluations on your AI systems.