Overview

The NIST AI Risk Management Framework (AI RMF 1.0), published by the National Institute of Standards and Technology in January 2023, provides a voluntary framework for managing risks throughout the AI lifecycle. It is designed to help organizations design, develop, deploy, and use AI systems in ways that are trustworthy and responsible. NIST AI RMF is becoming the de facto standard for AI governance in the United States, referenced by federal procurement policies and increasingly adopted by private sector organizations.

Core functions

The framework is organized around four core functions:

Govern

Establish and maintain policies, processes, and organizational structures for AI risk management.
| Activity | Description |
| --- | --- |
| Policies & procedures | Define AI governance policies, risk tolerances, and escalation procedures |
| Roles & responsibilities | Assign accountable individuals for AI risk management across the organization |
| Culture | Foster an organizational culture that prioritizes trustworthy AI |
| Third-party management | Establish oversight for AI components from external providers |

Map

Identify and contextualize risks associated with AI systems.
| Activity | Description |
| --- | --- |
| Context establishment | Document the intended purpose, scope, and operational environment of the AI system |
| Stakeholder identification | Identify all parties affected by the AI system's outputs and decisions |
| Risk identification | Catalog potential risks including bias, safety, security, and privacy concerns |
| Benefits & costs | Assess the tradeoffs between AI deployment benefits and potential harms |
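The Map function produces a risk register: one structured record per identified risk, tied to the system's context and stakeholders. A minimal sketch of such a record is below; the field names and `RiskEntry` class are illustrative assumptions, since NIST AI RMF does not prescribe a schema.

```python
from dataclasses import dataclass, field

@dataclass
class RiskEntry:
    """One row of an AI risk register built during the Map function.

    Fields are illustrative only; adapt them to your organization's
    own risk taxonomy and documentation requirements.
    """
    system: str                      # which AI system the risk belongs to
    category: str                    # e.g. "bias", "safety", "security", "privacy"
    description: str                 # what could go wrong, in plain language
    stakeholders: list = field(default_factory=list)  # parties affected
    intended_purpose: str = ""       # documented context from Map

# Example: cataloging a bias risk for a hiring assistant
entry = RiskEntry(
    system="resume-screener",
    category="bias",
    description="Model may rank candidates differently across demographic groups",
    stakeholders=["applicants", "hiring managers"],
    intended_purpose="Pre-screen resumes for recruiter review",
)
```

Keeping each risk as a structured record (rather than free text) makes the later Measure and Manage steps, such as scoring and prioritization, straightforward to automate.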

Measure

Analyze and assess identified AI risks using quantitative and qualitative methods.
| Activity | Description |
| --- | --- |
| Risk assessment | Evaluate likelihood and impact of identified risks |
| Testing & evaluation | Conduct systematic testing including red-teaming and adversarial evaluation |
| Metrics & monitoring | Define key risk indicators and establish ongoing monitoring |
| Bias measurement | Quantify bias across demographics, use cases, and deployment contexts |
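One common quantitative bias metric is the demographic parity gap: the difference in positive-outcome rates between the most- and least-favored groups. A minimal sketch, assuming a list of (group, outcome) pairs; the `demographic_parity_gap` helper is hypothetical and not part of any NIST or Know Your AI API:

```python
from collections import defaultdict

def demographic_parity_gap(records):
    """Largest gap in positive-outcome rates across groups.

    `records` is an iterable of (group, outcome) pairs, where outcome
    is 1 for a positive model decision and 0 otherwise. Returns the
    gap and the per-group rates. Illustrative helper only.
    """
    totals, positives = defaultdict(int), defaultdict(int)
    for group, outcome in records:
        totals[group] += 1
        positives[group] += outcome
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Example: group "a" approved 2/3 of the time, group "b" 1/3
decisions = [("a", 1), ("a", 1), ("a", 0), ("b", 1), ("b", 0), ("b", 0)]
gap, rates = demographic_parity_gap(decisions)  # gap ≈ 0.33
```

Demographic parity is only one of several fairness definitions; which metric is appropriate depends on the use case and deployment context documented during the Map function.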

Manage

Prioritize, respond to, and monitor AI risks on an ongoing basis.
| Activity | Description |
| --- | --- |
| Risk prioritization | Rank risks by severity and allocate mitigation resources |
| Risk response | Implement controls, mitigations, and safeguards |
| Continuous monitoring | Track risk levels over time with dashboards and alerts |
| Incident response | Establish procedures for responding to AI safety incidents |
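Risk prioritization is often implemented as a simple severity score, such as likelihood times impact, used to rank the register. A minimal sketch under that assumption; the scoring scheme and `prioritize` function are illustrative, not prescribed by the framework:

```python
def prioritize(risks):
    """Rank risks by severity (likelihood x impact), highest first.

    Each risk is a dict with a 'likelihood' in [0, 1] and an 'impact'
    on a 1-5 scale. The scoring scheme is an assumption for this
    sketch; NIST AI RMF leaves the method to the organization.
    """
    return sorted(risks, key=lambda r: r["likelihood"] * r["impact"], reverse=True)

# Example: three risks from a register
risks = [
    {"name": "logging gap", "likelihood": 0.1, "impact": 2},
    {"name": "prompt injection", "likelihood": 0.9, "impact": 5},
    {"name": "stale eval data", "likelihood": 0.5, "impact": 3},
]
ranked = prioritize(risks)  # "prompt injection" first
```

Whatever scoring scheme is chosen, it should be documented under the Govern function so rankings are reproducible and auditable.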

Trustworthy AI characteristics

NIST AI RMF defines seven characteristics of trustworthy AI:
  1. Valid & reliable — The AI system performs as intended with consistent results
  2. Safe — The system does not endanger life, health, property, or the environment
  3. Secure & resilient — Protected against adversarial attacks and able to degrade gracefully under failure
  4. Accountable & transparent — Decisions are explainable and responsibility is clear
  5. Explainable & interpretable — Outputs can be understood by stakeholders
  6. Privacy-enhanced — Personal data is protected throughout the AI lifecycle
  7. Fair with harmful bias managed — Bias is identified, measured, and mitigated

How Know Your AI maps to NIST AI RMF

| NIST Function | Know Your AI Feature |
| --- | --- |
| Govern | Workspace roles, access controls, compliance dashboards |
| Map | Product configuration, dataset selection, risk categorization |
| Measure | Evaluation with LLM-as-Judge, security scoring, bias datasets |
| Manage | Monitoring dashboards, firewall, continuous evaluation scheduling |

| Trustworthy Characteristic | Know Your AI Coverage |
| --- | --- |
| Valid & reliable | Evaluation scoring, quality metrics |
| Safe | Safety datasets, harmful content detection |
| Secure & resilient | Red-team testing, jailbreak & prompt injection datasets |
| Accountable & transparent | Evaluation evidence, tracing, audit logs |
| Explainable & interpretable | Per-prompt judge analysis with reasoning |
| Privacy-enhanced | CCPA/CPRA compliance analysis, PII leakage detection |
| Fair with bias managed | Bias detection datasets |

Resources

Compliance

How Know Your AI automates regulatory compliance.

Evaluation

Run NIST-aligned evaluations on your AI systems.