Documentation Index

Fetch the complete documentation index at: https://hydroxai.mintlify.app/llms.txt

Use this file to discover all available pages before exploring further.

Core platform

  • Workspaces: Multi-tenant project isolation with tiers (Free, Pro, Enterprise), quotas, and role-based access
  • Products: Three product types (Website, API, and Streaming API), each with an onboarding roadmap and connection wizard
  • Members & invitations: Invite by email with roles (owner, admin, developer, viewer) and invitation lifecycle tracking
  • Usage & quotas: Monthly quota allocation with a usage ledger and reset tracking
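The monthly quota and usage-ledger model can be sketched roughly as below. The class, field, and method names are illustrative assumptions, not the platform's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class UsageLedger:
    """Hypothetical sketch of a monthly quota ledger (names are assumptions)."""
    monthly_quota: int                            # units allocated per billing month
    used: int = 0                                 # units consumed since the last reset
    entries: list = field(default_factory=list)   # per-event usage records

    def record(self, units: int, note: str = "") -> bool:
        """Record usage if quota allows; return False when the quota would be exceeded."""
        if self.used + units > self.monthly_quota:
            return False
        self.used += units
        self.entries.append((units, note))
        return True

    def reset(self) -> None:
        """Monthly reset: zero the counter and clear the ledger entries."""
        self.used = 0
        self.entries.clear()
```

A workspace would call `record` on each billable event and `reset` at the start of each billing month; the reset-tracking detail is left out of this sketch.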

Evaluations

  • Model Evaluation (API Mode): Connect your AI model’s API endpoint and run automated red-team testing with attack datasets
  • Chatbot Evaluation (Website Mode): Evaluate live chatbot websites with a browser-control agent for end-to-end red-team testing
  • LLM-as-Judge: Configurable judgment model and prompt with vulnerability scoring
  • Security testing: Red-team your AI with attack prompts and get per-prompt pass/fail results
  • Real-time console: Streaming execution logs with prompt/response/judgment bubbles
  • Scheduled evaluations: Cron-based scheduling (hourly, daily, weekly, monthly, custom) with enable/disable
  • Evaluation Market: Browse pre-configured evaluation templates (Safety, Compliance, Quality, Performance)
  • Run history: Full test-run history in a sortable, filterable table
  • Screenshot library: Browse screenshots captured during chatbot evaluations
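The per-prompt pass/fail idea combined with LLM-as-judge vulnerability scoring can be sketched as a simple thresholding step. The 0–1 score range and the threshold value are illustrative assumptions, not the platform's documented behavior:

```python
def judge_results(scores, threshold=0.5):
    """Mark each attack prompt pass/fail from a judge's vulnerability score.

    A prompt "passes" (the model resisted the attack) when its judged
    vulnerability score stays below the threshold. The score scale and
    threshold here are assumptions for illustration.
    """
    results = ["pass" if s < threshold else "fail" for s in scores]
    pass_rate = results.count("pass") / len(results) if results else 0.0
    return results, pass_rate
```

A run-level summary would then aggregate `pass_rate` across the attack dataset.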

Monitoring & tracing

  • SDK integration: @know-your-ai SDK with a DSN for automatic event capture
  • Monitoring dashboard: Requests, tokens, cost, latency, errors, and provider/model distribution
  • Tracing: D3-based span-tree visualization supporting generation, agent, tool, chain, retriever, evaluator, and guardrail spans
  • Token usage charts: Time-series input/output/total token charts with configurable granularity
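Time-series token charts with configurable granularity reduce to bucketing captured events by a fixed time width. A minimal sketch, with the event tuple shape and granularity parameter as assumptions:

```python
from collections import defaultdict
from datetime import datetime, timezone

def bucket_tokens(events, granularity_s=3600):
    """Aggregate (timestamp, input_tokens, output_tokens) events into
    fixed-width time buckets keyed by epoch seconds.

    granularity_s is the bucket width in seconds (3600 = hourly view).
    Each bucket holds [input, output, total] token counts.
    """
    buckets = defaultdict(lambda: [0, 0, 0])
    for ts, tin, tout in events:
        key = int(ts.timestamp()) // granularity_s * granularity_s
        buckets[key][0] += tin
        buckets[key][1] += tout
        buckets[key][2] += tin + tout
    return dict(buckets)
```

Switching the chart's granularity (e.g. hourly to daily) only changes `granularity_s`; the raw events stay the same.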

Firewall

  • Firewall API keys: Generate and manage firewall API keys per product
  • Validation logs: History of all firewall validations with risk categorization, score, and reason
  • Token usage: Input/output token tracking with time-series analytics
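The risk categorization attached to each validation log entry can be pictured as a banding function over the risk score. The score range, band boundaries, and category labels below are all assumptions for illustration, not the firewall's actual scheme:

```python
def categorize_risk(score):
    """Map a risk score in [0, 1] to a category label.

    The bands and labels here are illustrative assumptions.
    """
    if not 0.0 <= score <= 1.0:
        raise ValueError("score must be in [0, 1]")
    if score < 0.25:
        return "low"
    if score < 0.5:
        return "medium"
    if score < 0.75:
        return "high"
    return "critical"
```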

Compliance

  • CCPA/CPRA analysis: Automated compliance judgment with three-tier violation scoring (Direct, Indirect, Ancillary)
  • Severity levels: None, low, medium, high, and critical severity classification
  • Evidence store: Persistent evidence records with full prompt/response pairs and legal references
  • Regulatory coverage: EU AI Act and NIST AI RMF coverage with baseline vs. firewall comparison
  • Per-run reports: Detailed compliance report for each evaluation run
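One way the three violation tiers could roll up into the five severity levels is a weighted sum over per-tier violation counts. The tier names and severity labels come from the feature list above; the weights and thresholds are invented for illustration:

```python
# Hypothetical roll-up from violation tiers to a severity level.
# Weights and thresholds are illustrative assumptions only.
TIER_WEIGHTS = {"direct": 3, "indirect": 2, "ancillary": 1}

def severity_for(violations):
    """violations: mapping of tier name -> violation count."""
    score = sum(TIER_WEIGHTS[tier] * count for tier, count in violations.items())
    if score == 0:
        return "none"
    if score <= 2:
        return "low"
    if score <= 5:
        return "medium"
    if score <= 9:
        return "high"
    return "critical"
```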

AI quality metrics

Nine quality dimensions (some coming soon):
  • Overview
  • Hallucination Risk
  • Task Fulfillment
  • Reasoning Quality
  • Math & Calculation
  • Creativity
  • Empathy
  • Style & Personality
  • Safety & Risk

Additional features

  • AI chat: Built-in Gemini chat panel with session management and context-aware conversations
  • Security report: Visual before/after comparison of baseline vs. firewall results
  • Support: In-app bug reports, attack reports, billing issues, and feature requests
  • Internationalization: Multi-language UI support
  • Theme: Light and dark mode

Evaluation deep dive

Learn about Model Evaluation and Chatbot Evaluation.

Monitoring deep dive

Learn about SDK integration and tracing.