Know Your AI is a security, evaluation, and monitoring platform built to help teams understand, test, and protect AI systems end-to-end. It combines attack-driven datasets, automated LLM judging, compliance analysis, SDK-based monitoring, and a firewall — all in one place — so teams can ship with confidence. This site contains the product overview, quickstart guides, SDK & CLI documentation, and feature docs to get your team up and running.Documentation Index
Fetch the complete documentation index at: https://hydroxai.mintlify.app/llms.txt
Use this file to discover all available pages before exploring further.
Platform quickstart
Create your first workspace, connect a product, and run a security evaluation.
SDK documentation
Install the SDK to monitor, trace, evaluate, and protect your AI apps.
CLI documentation
Run evaluations and inspect results from your terminal.
What is Know Your AI?
Know Your AI is an evaluation, monitoring, and compliance platform for AI applications. It centralizes:- Workspaces for organizing teams, products, and access controls
- Products representing the AI systems you evaluate (Website, API, or Streaming API)
- Datasets from the Marketplace and your own uploads — covering attack prompts, safety tests, and benchmarks
- Evaluations with configurable LLM-as-Judge scoring and automated compliance analysis
- Monitoring with SDK-based dashboards for requests, tokens, latency, cost, and errors
- Tracing with span-tree visualization for debugging AI interactions
- Firewall for real-time input/output validation with risk categorization
- Compliance dashboards for CCPA/CPRA violation tracking with evidence trails
Who is it for?
- Product teams validating new model releases and prompt changes
- ML/AI engineers red-teaming models with attack datasets
- Security teams testing for jailbreaks, prompt injection, and data extraction
- Risk & compliance teams tracking CCPA/CPRA, EU AI Act, and NIST AI RMF policies
- Engineering teams monitoring production AI with SDK integration
Core concepts at a glance
Product
Explore the platform modules and how they work together.
Workspace
Understand project boundaries, roles, and access controls.
Evaluation
Learn how LLM-as-Judge scoring and security testing work.
Monitoring
Track live performance with SDK dashboards and tracing.
Typical workflow
Connect and evaluate
Connect your product endpoint and run security evaluations with LLM-as-Judge scoring.
Where to go next
Get started
Launch your first evaluation.
SDK
Monitor and protect AI with the SDK.
CLI
Run evaluations from the terminal.
Datasets
Browse curated attack datasets and methods.
Monitoring
Track model health with SDK monitoring.
Firewall
Block dangerous inputs in real time.