The monitoring SDK automatically captures every AI model interaction and sends telemetry to the Know Your AI dashboard. You get real-time visibility into requests, token usage, cost, latency, errors, and more.
Installation
```bash
npm install @know-your-ai/node
```
Basic setup
```typescript
import * as KnowYourAI from '@know-your-ai/node';
import { GoogleGenAI } from '@google/genai';

// 1. Initialize the SDK with your DSN
KnowYourAI.init({
  dsn: process.env.KNOW_YOUR_AI_DSN!,
  integrations: [KnowYourAI.googleGenAIIntegration()],
});

// 2. Instrument your AI client
const genAI = new GoogleGenAI({ apiKey: process.env.GOOGLE_API_KEY! });
const client = KnowYourAI.instrumentGoogleGenAIClient(genAI);

// 3. Use as normal — everything is automatically tracked
const response = await client.models.generateContent({
  model: 'gemini-2.0-flash',
  contents: 'Hello, world!',
});
```
Supported operations
The Google GenAI integration automatically instruments:
| Method | Description |
|---|---|
| `models.generateContent()` | Single-turn text generation |
| `models.generateContentStream()` | Streaming text generation |
| `chats.create()` | Create a chat session |
| `chat.sendMessage()` | Send a message in a chat |
| `chat.sendMessageStream()` | Stream a message in a chat |
What gets captured
Every AI call is captured as a CapturedAIData event containing:
```typescript
interface CapturedAIData {
  id: string;                     // Unique event ID
  timestamp: number;              // Unix timestamp
  provider: string;               // e.g. 'google_genai'
  model: string;                  // e.g. 'gemini-2.0-flash'
  operation: string;              // e.g. 'generateContent'
  duration?: number;              // Response time in ms
  input?: AIMessage[];            // User messages
  output?: string;                // AI response text
  streaming?: boolean;            // Was this a stream?
  tokenUsage?: TokenUsage;        // Input/output/total tokens
  cost?: CostEstimate;            // Estimated cost
  latency?: LatencyMetrics;       // TTFB, throughput, etc.
  error?: ErrorDetails;           // Error info if failed
  toolCalls?: ToolCall[];         // Function/tool calls made
  requestParams?: RequestParams;  // temperature, maxTokens, etc.
  traceId?: string;               // Trace correlation
  sessionId?: string;             // Session correlation
  environment?: string;           // Environment tag
  tags?: Record<string, string>;  // Custom tags
}
```
Token usage
```typescript
interface TokenUsage {
  inputTokens?: number;
  outputTokens?: number;
  totalTokens?: number;
  cachedTokens?: number;
  reasoningTokens?: number;
}
```
Cost estimation
The SDK automatically estimates cost based on the model and token count:
```typescript
interface CostEstimate {
  inputCost?: number;
  outputCost?: number;
  totalCost?: number;
  currency: string;  // 'USD'
  inputPricePerK?: number;
  outputPricePerK?: number;
}
```
Latency metrics
For streaming responses, additional latency data is captured:
```typescript
interface LatencyMetrics {
  total: number;              // Total response time (ms)
  ttfb?: number;              // Time to first byte (ms)
  throughput?: number;        // Tokens per second
  chunkCount?: number;        // Number of chunks
  avgChunkInterval?: number;  // Average ms between chunks
}
```
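These values can all be derived from the stream's start time and the arrival time of each chunk. A rough sketch of the arithmetic (the `computeLatencyMetrics` helper is hypothetical and assumes at least one chunk arrived; the SDK's internals may differ):

```typescript
interface LatencyMetrics {
  total: number;
  ttfb?: number;
  throughput?: number;
  chunkCount?: number;
  avgChunkInterval?: number;
}

// Hypothetical helper: derive streaming latency metrics from a start
// timestamp, per-chunk arrival timestamps (all in ms), and the output
// token count. Assumes chunkArrivalsMs has at least one entry.
function computeLatencyMetrics(
  startMs: number,
  chunkArrivalsMs: number[],
  outputTokens: number,
): LatencyMetrics {
  const total = chunkArrivalsMs[chunkArrivalsMs.length - 1] - startMs;
  const ttfb = chunkArrivalsMs[0] - startMs;
  const chunkCount = chunkArrivalsMs.length;
  return {
    total,
    ttfb,
    chunkCount,
    // Tokens per second over the whole response.
    throughput: total > 0 ? outputTokens / (total / 1000) : undefined,
    // Mean gap between consecutive chunks.
    avgChunkInterval:
      chunkCount > 1 ? (total - ttfb) / (chunkCount - 1) : undefined,
  };
}
```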
Streaming support
Streaming responses are fully supported. The SDK captures all chunks and computes streaming-specific metrics:
```typescript
const stream = await client.models.generateContentStream({
  model: 'gemini-2.0-flash',
  contents: 'Tell me a story about a robot.',
});

for await (const chunk of stream) {
  process.stdout.write(chunk.text || '');
}

// Streaming metrics (TTFB, throughput, chunk count) are automatically captured
```
Chat sessions
Multi-turn conversations are tracked automatically:
```typescript
const chat = client.chats.create({
  model: 'gemini-2.0-flash',
  history: [
    { role: 'user', parts: [{ text: 'You are a helpful assistant.' }] },
  ],
});

const response1 = await chat.sendMessage({ message: 'What is TypeScript?' });
const response2 = await chat.sendMessage({ message: 'How does it differ from JavaScript?' });

// Both messages are captured as individual events with session correlation
```
Custom callback
Use onCapture to receive every captured event in your own code:
```typescript
KnowYourAI.init({
  dsn: process.env.KNOW_YOUR_AI_DSN!,
  onCapture: (data) => {
    console.log(`[${data.provider}/${data.model}] ${data.operation}`);
    console.log(`  Tokens: ${data.tokenUsage?.totalTokens}`);
    console.log(`  Duration: ${data.duration}ms`);
    console.log(`  Cost: $${data.cost?.totalCost?.toFixed(6)}`);
  },
  integrations: [KnowYourAI.googleGenAIIntegration()],
});
```
Custom HTTP transport
Send events to your own backend instead of (or in addition to) the Know Your AI dashboard:
```typescript
KnowYourAI.init({
  transport: KnowYourAI.createHttpTransport({
    endpoint: 'https://your-api.com/ai-events',
    apiKey: 'your-key',
    headers: {
      'X-App-Name': 'my-app',
    },
  }),
  integrations: [KnowYourAI.googleGenAIIntegration()],
});
```
Available transports
| Transport | Description |
|---|---|
| `createKnowYourAITransport({ dsn })` | Send to the Know Your AI backend (auto-configured via DSN) |
| `createHttpTransport({ endpoint, apiKey })` | Send to any HTTP endpoint |
| `createConsoleTransport()` | Print events to the console (debugging) |
| `createNoopTransport()` | Discard events (testing) |
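Because all transports share one shape, writing a custom one is mostly a matter of implementing a single method. A sketch, assuming a transport is any object exposing an async `send(events)` method (the exact interface the SDK exports is not shown here):

```typescript
// Assumed transport shape: any object with an async send(events) method.
interface Transport {
  send(events: Record<string, unknown>[]): Promise<void>;
}

// Rough equivalent of createNoopTransport(): silently discard events.
function noopTransport(): Transport {
  return { send: async () => {} };
}

// Rough equivalent of createConsoleTransport(): print each event as JSON.
// The log function is injectable so the behavior is easy to test.
function consoleTransport(log: (line: string) => void = console.log): Transport {
  return {
    send: async (events) => {
      for (const event of events) log(JSON.stringify(event));
    },
  };
}
```

Injecting the log function, as above, keeps the console transport trivially testable without capturing stdout.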
Privacy controls
Control what data is captured:
```typescript
KnowYourAI.init({
  dsn: process.env.KNOW_YOUR_AI_DSN!,
  recordInputs: false,        // Don't capture user messages
  recordOutputs: false,       // Don't capture AI responses
  recordRequestParams: true,  // Still capture temperature, etc.
  sampleRate: 0.5,            // Only capture 50% of events
  integrations: [KnowYourAI.googleGenAIIntegration()],
});
```
Per-integration privacy:
```typescript
KnowYourAI.init({
  dsn: process.env.KNOW_YOUR_AI_DSN!,
  integrations: [
    KnowYourAI.googleGenAIIntegration({
      recordInputs: false,
      recordOutputs: false,
      recordRequestParams: true,
    }),
  ],
});
```
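Conceptually, these flags act as a redaction step applied to each event before it leaves the process, while `sampleRate` gates whether the event is kept at all. A simplified sketch (`applyPrivacy` and `shouldSample` are illustrative names, not SDK exports):

```typescript
interface PrivacyOptions {
  recordInputs: boolean;
  recordOutputs: boolean;
  recordRequestParams: boolean;
}

interface CapturedEvent {
  model: string;
  input?: unknown;
  output?: unknown;
  requestParams?: Record<string, unknown>;
}

// Drop the fields the privacy flags exclude; everything else passes through.
function applyPrivacy(event: CapturedEvent, opts: PrivacyOptions): CapturedEvent {
  const redacted = { ...event };
  if (!opts.recordInputs) delete redacted.input;
  if (!opts.recordOutputs) delete redacted.output;
  if (!opts.recordRequestParams) delete redacted.requestParams;
  return redacted;
}

// sampleRate: keep a uniform fraction of events (0 = none, 1 = all).
// The random source is injectable for testing.
function shouldSample(sampleRate: number, rand: () => number = Math.random): boolean {
  return rand() < sampleRate;
}
```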
Hooks
Hooks let you intercept and modify AI calls before they reach the model, and inspect responses after they return:
Before-request hook
```typescript
const hookManager = KnowYourAI.getHookManager();

hookManager.addBeforeRequestHook(async (ctx) => {
  // Log every request
  console.log(`Calling ${ctx.model} with ${ctx.input?.length} messages`);

  // Optionally block
  if (ctx.input?.some(m => m.content.includes('forbidden'))) {
    return { action: 'block', reason: 'Blocked by policy' };
  }

  // Optionally modify
  return {
    action: 'modify',
    modified: { requestParams: { ...ctx.requestParams, temperature: 0.5 } },
  };
}, 'my-policy-hook');
```
After-response hook
```typescript
hookManager.addAfterResponseHook(async (ctx) => {
  // Log every response
  console.log(`${ctx.model} responded in ${ctx.duration}ms`);

  if (ctx.error) {
    console.error(`Error: ${ctx.error.message}`);
  }
}, 'my-logging-hook');
```
Hook result actions
| Action | Effect |
|---|---|
| `'continue'` | Proceed normally |
| `'block'` | Throw `HookBlockedError` and prevent the request |
| `'modify'` | Apply modifications to the request/response context |
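Putting the three actions together, the hook manager can be pictured as a loop that runs each hook in registration order and reacts to its result. A simplified synchronous sketch (real hooks are async, and apart from `HookBlockedError` the names here are illustrative):

```typescript
// Thrown when a hook returns { action: 'block' }.
class HookBlockedError extends Error {}

interface RequestContext {
  model: string;
  requestParams: Record<string, unknown>;
}

type HookResult =
  | { action: 'continue' }
  | { action: 'block'; reason?: string }
  | { action: 'modify'; modified: Partial<RequestContext> };

type BeforeRequestHook = (ctx: RequestContext) => HookResult;

// Run hooks in registration order, applying each result's action.
function runBeforeRequestHooks(
  hooks: BeforeRequestHook[],
  ctx: RequestContext,
): RequestContext {
  let current = ctx;
  for (const hook of hooks) {
    const result = hook(current);
    if (result.action === 'block') {
      throw new HookBlockedError(result.reason ?? 'Blocked by hook');
    }
    if (result.action === 'modify') {
      current = { ...current, ...result.modified };
    }
    // 'continue': fall through to the next hook unchanged.
  }
  return current;
}
```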
Flushing
The SDK batches events and sends them periodically. To ensure all events are sent before your process exits:
```typescript
// Flush all pending events (with optional timeout in ms)
await KnowYourAI.getClient()?.flush(5000);
```
Error tracking
Errors are automatically captured with structured metadata:
```typescript
interface ErrorDetails {
  type: 'rate_limit' | 'content_filter' | 'timeout' | 'invalid_request' |
        'authentication' | 'network' | 'server_error' | 'unknown';
  message: string;
  code?: string;
  statusCode?: number;
  retryable?: boolean;
  retryAfter?: number;
}
```
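A classifier like this typically keys off the HTTP status code and marks transient failures as retryable. A sketch of plausible mappings (the `classifyError` helper is illustrative; the SDK's actual classification rules are not documented here):

```typescript
type ErrorType =
  | 'rate_limit' | 'content_filter' | 'timeout' | 'invalid_request'
  | 'authentication' | 'network' | 'server_error' | 'unknown';

interface ErrorDetails {
  type: ErrorType;
  message: string;
  statusCode?: number;
  retryable?: boolean;
}

// Map an HTTP status code onto a structured, retryability-aware error.
function classifyError(message: string, statusCode?: number): ErrorDetails {
  let type: ErrorType = 'unknown';
  let retryable = false;
  if (statusCode === 429) {
    type = 'rate_limit';
    retryable = true; // back off and retry after the indicated delay
  } else if (statusCode === 401 || statusCode === 403) {
    type = 'authentication';
  } else if (statusCode === 400) {
    type = 'invalid_request';
  } else if (statusCode !== undefined && statusCode >= 500) {
    type = 'server_error';
    retryable = true; // transient provider-side failures are worth retrying
  }
  return { type, message, statusCode, retryable };
}
```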
Full example
```typescript
import * as KnowYourAI from '@know-your-ai/node';
import { GoogleGenAI } from '@google/genai';

// Initialize with all options
KnowYourAI.init({
  dsn: process.env.KNOW_YOUR_AI_DSN!,
  environment: 'production',
  release: '1.2.0',
  sampleRate: 1.0,
  batchSize: 10,
  flushInterval: 5000,
  debug: false,
  onCapture: (data) => {
    console.log(`Captured: ${data.model} ${data.operation} (${data.duration}ms)`);
  },
  integrations: [
    KnowYourAI.googleGenAIIntegration({
      recordInputs: true,
      recordOutputs: true,
      recordRequestParams: true,
    }),
  ],
});

// Instrument the client
const genAI = new GoogleGenAI({ apiKey: process.env.GOOGLE_API_KEY! });
const client = KnowYourAI.instrumentGoogleGenAIClient(genAI);

// Make AI calls — everything is tracked automatically
const response = await client.models.generateContent({
  model: 'gemini-2.0-flash',
  contents: 'What are the benefits of TypeScript?',
});
console.log(response.text);

// Ensure events are sent before exit
await KnowYourAI.getClient()?.flush();
```