Documentation Index

Fetch the complete documentation index at: https://hydroxai.mintlify.app/llms.txt

Use this file to discover all available pages before exploring further.

AI applications in production face two critical risks:
  1. Unpredictable agent behavior — AI agents can hallucinate, leak sensitive data, call unauthorized tools, or drift from their intended goals
  2. Harmful content generation — Models can produce toxic, biased, or policy-violating outputs that reach your end users
Know Your AI’s Monitoring and Firewall work together to address both:

Monitoring

See everything. Track every AI request, response, tool call, token, cost, and error in real time. Trace multi-step agent workflows as span trees. Catch anomalies before they become incidents.
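As a rough illustration of span-tree tracing, a multi-step agent run can be recorded as nested spans and aggregated per request. The `Span` shape and `totals` helper below are hypothetical sketches, not the SDK's actual types:

```typescript
// Hypothetical sketch of a multi-step agent workflow as a span tree.
// These types are illustrative, not the Know Your AI SDK API.
interface Span {
  name: string;
  tokens: number;   // tokens consumed by this step
  costUsd: number;  // cost attributed to this step
  children: Span[]; // nested steps (tool calls, sub-agents)
}

// Sum tokens and cost across a whole trace, so anomalies such as a
// runaway tool loop stand out against a per-request budget.
function totals(span: Span): { tokens: number; costUsd: number } {
  return span.children.reduce(
    (acc, child) => {
      const t = totals(child);
      return { tokens: acc.tokens + t.tokens, costUsd: acc.costUsd + t.costUsd };
    },
    { tokens: span.tokens, costUsd: span.costUsd },
  );
}

const trace: Span = {
  name: "agent.run",
  tokens: 120,
  costUsd: 0.0012,
  children: [
    { name: "tool.search", tokens: 300, costUsd: 0.003, children: [] },
    { name: "llm.summarize", tokens: 580, costUsd: 0.0058, children: [] },
  ],
};

const usage = totals(trace); // aggregate usage for the whole workflow
```

Walking the tree this way is what lets a dashboard attribute cost and latency to individual agent steps rather than to the request as a whole.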

Firewall

Block threats. Validate every input and output against content safety policies. Block jailbreak attempts, prompt injection, PII leakage, and harmful content — before they reach your users.

Architecture

        User Input
             │
             ▼
┌─────────────────────────┐
│   Firewall (Input)      │  ← Block jailbreaks, prompt injection, PII
│   beforeRequest hook    │
└────────────┬────────────┘
             │  ✔ Safe
             ▼
┌─────────────────────────┐
│   AI Model              │  ← Your LLM (Gemini, GPT, Claude, etc.)
└────────────┬────────────┘
             │
             ▼
┌─────────────────────────┐
│   Firewall (Output)     │  ← Flag harmful, biased, or toxic responses
│   afterResponse hook    │
└────────────┬────────────┘
             │  ✔ Safe
             ▼
┌─────────────────────────┐
│   Monitoring            │  ← Capture tokens, cost, latency, traces
│   Know Your AI Backend  │
└────────────┬────────────┘
             │
             ▼
        User Response
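The pipeline in the diagram can be sketched end to end. Everything below is a self-contained illustration: `checkPolicy`, `guardedCall`, and the regex stand in for the hosted firewall service and are not the real API; only the `block` / `log` violation modes mirror the integration's configuration options:

```typescript
// Illustrative sketch of the request pipeline: input check -> model ->
// output check -> monitoring. Not the real firewall API.
type Verdict = { safe: boolean; category?: string };
type Mode = "block" | "log";

// Hypothetical policy check; the real service evaluates jailbreaks,
// prompt injection, PII, and harmful content.
function checkPolicy(text: string): Verdict {
  if (/ignore previous instructions/i.test(text)) {
    return { safe: false, category: "prompt_injection" };
  }
  return { safe: true };
}

const violations: string[] = []; // stands in for monitoring capture

function guardedCall(
  input: string,
  model: (prompt: string) => string,
  onInputViolation: Mode = "block",
  onOutputViolation: Mode = "log",
): string {
  // beforeRequest: validate the user input before it reaches the model
  const inVerdict = checkPolicy(input);
  if (!inVerdict.safe) {
    violations.push(`input:${inVerdict.category}`);
    if (onInputViolation === "block") {
      throw new Error(`blocked input (${inVerdict.category})`);
    }
  }

  const output = model(input);

  // afterResponse: validate the model output before it reaches the user
  const outVerdict = checkPolicy(output);
  if (!outVerdict.safe) {
    violations.push(`output:${outVerdict.category}`);
    if (onOutputViolation === "block") {
      throw new Error(`blocked output (${outVerdict.category})`);
    }
  }
  return output;
}

// A safe prompt passes both checks; an injection attempt is stopped
// before the model is ever called.
const ok = guardedCall("Summarize this article", (p) => `summary of: ${p}`);
```

Note the asymmetry this sketch demonstrates: a blocked input never costs a model call, while a logged output violation still reaches the user but leaves an audit trail.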

What you can protect against

Threat              | Monitoring detects                        | Firewall blocks
--------------------|-------------------------------------------|-------------------------------
Jailbreak attempts  | Logs the input for review                 | Blocks before reaching model
Prompt injection    | Tracks anomalous inputs                   | Blocks injected instructions
PII leakage         | Flags responses containing personal data  | Blocks PII from being returned
Harmful content     | Captures and categorizes output           | Blocks toxic/hateful responses
Excessive agency    | Traces all tool calls and agent steps     |
Cost spikes         | Alerts on abnormal token usage            |
Latency degradation | Tracks TTFB and response times            |
Model errors        | Captures error types and rates            |

Getting started

1. Install the SDK

npm install @know-your-ai/node @know-your-ai/firewall
2. Get your DSN and Firewall API key

From the Know Your AI dashboard:
  • DSN: Settings → API Keys
  • Firewall API key: Product → Firewall → Generate Key
3. Initialize with both integrations

import * as KnowYourAI from '@know-your-ai/node';
import { firewallIntegration } from '@know-your-ai/firewall';

KnowYourAI.init({
  dsn: process.env.KNOW_YOUR_AI_DSN!,
  integrations: [
    KnowYourAI.googleGenAIIntegration(),
    firewallIntegration({
      baseUrl: process.env.FIREWALL_URL!,
      apiKey: process.env.FIREWALL_API_KEY!,
      onInputViolation: 'block', // reject unsafe inputs before the model call
      onOutputViolation: 'log',  // record unsafe outputs without blocking them
    }),
  ],
});
4. View dashboards

Open the Monitoring and Firewall Logs pages in your product dashboard to see real-time data.

Next steps

Real-time monitoring

Set up production monitoring with dashboards, tracing, and alerts.

Content firewall

Configure input/output validation and block harmful content.

Agent safety

Monitor and protect multi-step AI agent workflows.

Production recipes

Copy-paste recipes for common production setups.