Ready-to-use code patterns for common production scenarios. Each recipe is self-contained — copy, paste, and adapt.

## Documentation Index

Fetch the complete documentation index at: https://hydroxai.mintlify.app/llms.txt
Use this file to discover all available pages before exploring further.
## Recipe 1: Protected chatbot
A customer-facing chatbot with full monitoring and input/output firewall protection:

```typescript
import * as KnowYourAI from '@know-your-ai/node';
import { firewallIntegration } from '@know-your-ai/firewall';
import { GoogleGenAI } from '@google/genai';

// Initialize SDK + Firewall
KnowYourAI.init({
  dsn: process.env.KNOW_YOUR_AI_DSN!,
  environment: 'production',
  release: process.env.APP_VERSION,
  integrations: [
    KnowYourAI.googleGenAIIntegration(),
    firewallIntegration({
      baseUrl: process.env.FIREWALL_URL!,
      apiKey: process.env.FIREWALL_API_KEY!,
      onInputViolation: 'block',
      onOutputViolation: 'block',
      riskThreshold: 0.7,
    }),
  ],
});

const genAI = new GoogleGenAI({ apiKey: process.env.GOOGLE_API_KEY! });
const client = KnowYourAI.instrumentGoogleGenAIClient(genAI);

// Chat handler
async function handleChat(userId: string, message: string) {
  try {
    const chat = client.chats.create({
      model: 'gemini-2.0-flash',
      history: [],
    });
    const response = await chat.sendMessage({ message });
    return { success: true, reply: response.text };
  } catch (error) {
    if (error instanceof KnowYourAI.HookBlockedError) {
      return {
        success: false,
        reply: "I'm sorry, I cannot process that request. Please try rephrasing.",
        blocked: true,
      };
    }
    return { success: false, reply: 'An error occurred. Please try again.', error: true };
  }
}
```
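The handler above returns a small result object rather than rethrowing, so callers can distinguish firewall blocks from ordinary failures. One way a caller might translate that into HTTP semantics — the `statusForResult` helper and the choice of 422 for blocked input are illustrative, not part of the SDK:

```typescript
// Result shape returned by handleChat in the recipe above.
interface ChatResult {
  success: boolean;
  reply: string;
  blocked?: boolean;
  error?: boolean;
}

// Map a ChatResult to an HTTP status code: 200 for success,
// 422 for firewall-blocked input, 500 for unexpected failures.
function statusForResult(result: ChatResult): number {
  if (result.success) return 200;
  if (result.blocked) return 422;
  return 500;
}
```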
## Recipe 2: RAG pipeline with tracing
A retrieval-augmented generation pipeline where every step is traced:

```typescript
import * as KnowYourAI from '@know-your-ai/node';
import { firewallIntegration } from '@know-your-ai/firewall';
import { GoogleGenAI } from '@google/genai';

KnowYourAI.init({
  dsn: process.env.KNOW_YOUR_AI_DSN!,
  traceMode: true,
  integrations: [
    KnowYourAI.googleGenAIIntegration(),
    firewallIntegration({
      baseUrl: process.env.FIREWALL_URL!,
      apiKey: process.env.FIREWALL_API_KEY!,
      onInputViolation: 'block',
      onOutputViolation: 'log',
    }),
  ],
});

const genAI = new GoogleGenAI({ apiKey: process.env.GOOGLE_API_KEY! });
const ai = KnowYourAI.instrumentGoogleGenAIClient(genAI);

async function ragQuery(userId: string, question: string) {
  return KnowYourAI.withTrace(
    { name: 'rag-query', userId, metadata: { question } },
    async () => {
      // 1. Embed the query
      const embedding = KnowYourAI.startEmbedding('embed-query', {});
      embedding.setEmbeddingDetails('text-embedding-004', 1, 768);
      const queryVector = await embedText(question);
      embedding.end();

      // 2. Retrieve relevant documents
      const retriever = KnowYourAI.startRetriever('vector-search', {});
      retriever.setQuery(question);
      const documents = await vectorSearch(queryVector, { topK: 5 });
      retriever.setDocuments(documents);
      retriever.end();

      // 3. Generate answer with context
      const answer = await KnowYourAI.withGeneration('generate-answer', async (gen) => {
        gen.setModel('gemini-2.0-flash');
        const context = documents.map(d => d.content).join('\n\n');
        const res = await ai.models.generateContent({
          model: 'gemini-2.0-flash',
          contents: `Context:\n${context}\n\nQuestion: ${question}\n\nAnswer based on the context above.`,
        });
        return res.text;
      });

      return answer;
    }
  );
}
```
A single call produces a trace tree like this:

```
rag-query (trace)
├── embed-query (embedding) — 45ms
├── vector-search (retriever) — 120ms, 5 documents
└── generate-answer (generation) — 1.2s, 850 tokens
```
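The recipe assumes `embedText` and `vectorSearch` helpers that are not shown. A minimal in-memory sketch of the retrieval half, using cosine similarity over pre-embedded documents — note it takes the document set as an explicit argument, whereas the recipe's `vectorSearch` presumably closes over a vector store, and a real deployment would query one:

```typescript
interface Doc { content: string; vector: number[]; }

// Cosine similarity between two equal-length vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1);
}

// Return the topK documents most similar to the query vector.
function vectorSearch(queryVector: number[], docs: Doc[], opts: { topK: number }): Doc[] {
  return [...docs]
    .sort((x, y) => cosine(queryVector, y.vector) - cosine(queryVector, x.vector))
    .slice(0, opts.topK);
}
```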
## Recipe 3: Multi-agent system
An agent that delegates to specialized sub-agents, with full tracing and safety:

```typescript
import * as KnowYourAI from '@know-your-ai/node';
import { firewallIntegration } from '@know-your-ai/firewall';
import { GoogleGenAI } from '@google/genai';

KnowYourAI.init({
  dsn: process.env.KNOW_YOUR_AI_DSN!,
  traceMode: true,
  integrations: [
    KnowYourAI.googleGenAIIntegration(),
    firewallIntegration({
      baseUrl: process.env.FIREWALL_URL!,
      apiKey: process.env.FIREWALL_API_KEY!,
      onInputViolation: 'block',
      onOutputViolation: 'callback',
      riskThreshold: 0.7,
      violationCallback: async (ctx) => {
        console.warn(`[Safety] ${ctx.phase} violation in ${ctx.model}: ` +
          ctx.validation.risks.map(r => `${r.category}(${r.score})`).join(', '));
      },
    }),
  ],
});

const genAI = new GoogleGenAI({ apiKey: process.env.GOOGLE_API_KEY! });
const ai = KnowYourAI.instrumentGoogleGenAIClient(genAI);

async function orchestratorAgent(userMessage: string) {
  return KnowYourAI.withTrace({ name: 'orchestrator' }, async () => {
    const orchestrator = KnowYourAI.startAgent('orchestrator', {});
    orchestrator.setAvailableTools([
      { type: 'function', name: 'billing-agent', description: 'Handles billing questions' },
      { type: 'function', name: 'tech-support-agent', description: 'Handles technical issues' },
      { type: 'function', name: 'general-agent', description: 'Handles general inquiries' },
    ]);

    // Classify which sub-agent should handle this
    const routing = await KnowYourAI.withGeneration('route', async (gen) => {
      gen.setModel('gemini-2.0-flash');
      const res = await ai.models.generateContent({
        model: 'gemini-2.0-flash',
        contents: `Classify this request into: billing, tech-support, or general.\n\nRequest: ${userMessage}`,
      });
      return res.text?.trim().toLowerCase();
    });

    // Delegate to the appropriate sub-agent
    const subAgent = orchestrator.startAgent(`${routing}-agent`, {});
    const response = await KnowYourAI.withGeneration('sub-agent-respond', async (gen) => {
      gen.setModel('gemini-2.0-flash');
      const res = await ai.models.generateContent({
        model: 'gemini-2.0-flash',
        contents: `You are a ${routing} specialist. Help the user:\n\n${userMessage}`,
      });
      return res.text;
    });
    subAgent.setFinalAction(`Handled ${routing} request`);
    subAgent.end();

    orchestrator.incrementIterations();
    orchestrator.setFinalAction(`Routed to ${routing}-agent`);
    orchestrator.end();
    return response;
  });
}
```
For a billing request, the trace tree looks like this:

```
orchestrator (trace)
└── orchestrator (agent) — 3 tools, 1 iteration
    ├── route (generation) — 80ms
    └── billing-agent (agent)
        └── sub-agent-respond (generation) — 950ms
```
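The `route` generation returns free text, and `${routing}-agent` is interpolated directly into a span name and a prompt — an off-script model answer would produce a nonsense agent name. A sketch of a guard that pins the answer to a known route (the helper name and the fallback to `general` are illustrative choices, not SDK behavior):

```typescript
const KNOWN_ROUTES = ['billing', 'tech-support', 'general'] as const;
type Route = typeof KNOWN_ROUTES[number];

// Normalize a free-text classification into a known route,
// falling back to 'general' when the model answers off-script.
function normalizeRoute(raw: string | undefined): Route {
  const cleaned = (raw ?? '').trim().toLowerCase();
  const match = KNOWN_ROUTES.find(r => cleaned.includes(r));
  return match ?? 'general';
}
```

With this in place, `const routing = normalizeRoute(await KnowYourAI.withGeneration(...))` guarantees the sub-agent name is one of the three declared tools.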
## Recipe 4: Strict enterprise setup
Maximum security — block everything suspicious, log everything, cap costs:

```typescript
import * as KnowYourAI from '@know-your-ai/node';
import { firewallIntegration } from '@know-your-ai/firewall';

KnowYourAI.init({
  dsn: process.env.KNOW_YOUR_AI_DSN!,
  environment: 'production',
  traceMode: true,
  // Don't send message content to analytics (privacy)
  recordInputs: false,
  recordOutputs: false,
  recordRequestParams: true,
  enableCostEstimation: true,
  integrations: [
    KnowYourAI.googleGenAIIntegration(),
    firewallIntegration({
      baseUrl: process.env.FIREWALL_URL!,
      apiKey: process.env.FIREWALL_API_KEY!,
      onInputViolation: 'block',
      onOutputViolation: 'block',
      riskThreshold: 0.5, // Aggressive — flag medium-confidence risks
    }),
  ],
});

const hooks = KnowYourAI.getHookManager();

// Only allow approved models
hooks.addBeforeRequestHook(async (ctx) => {
  const APPROVED = ['gemini-2.0-flash'];
  if (!APPROVED.includes(ctx.model)) {
    return { action: 'block', reason: `Unauthorized model: ${ctx.model}` };
  }
}, 'model-policy');

// Cap all requests to 4096 tokens
hooks.addBeforeRequestHook(async (ctx) => {
  return {
    action: 'modify',
    modified: {
      requestParams: { ...ctx.requestParams, maxTokens: Math.min(ctx.requestParams?.maxTokens || 4096, 4096) },
    },
  };
}, 'token-cap');

// Force temperature to 0 for deterministic outputs
hooks.addBeforeRequestHook(async (ctx) => {
  return {
    action: 'modify',
    modified: {
      requestParams: { ...ctx.requestParams, temperature: 0 },
    },
  };
}, 'deterministic');

// Block any output containing PII patterns
hooks.addAfterResponseHook(async (ctx) => {
  if (!ctx.output) return;
  const piiPatterns = [
    /\b\d{3}-\d{2}-\d{4}\b/, // SSN
    /\b\d{4}[\s-]?\d{4}[\s-]?\d{4}[\s-]?\d{4}\b/, // Credit card
    /\b[A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,}\b/i, // Email
  ];
  for (const pattern of piiPatterns) {
    if (pattern.test(ctx.output)) {
      return { action: 'block', reason: 'PII detected in model output' };
    }
  }
}, 'pii-output-guard');
```
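The PII guard is exactly the kind of hook worth unit testing. Factoring the patterns out of the hook body makes that easy — a sketch, with the caveat that regex-based PII detection is best-effort and will miss obfuscated or unusually formatted values:

```typescript
// The same PII patterns used in the output guard above,
// factored into a reusable, unit-testable predicate.
const PII_PATTERNS: RegExp[] = [
  /\b\d{3}-\d{2}-\d{4}\b/,                      // SSN
  /\b\d{4}[\s-]?\d{4}[\s-]?\d{4}[\s-]?\d{4}\b/, // Credit card
  /\b[A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,}\b/i, // Email
];

// True if any pattern matches the given text.
function containsPII(text: string): boolean {
  return PII_PATTERNS.some(p => p.test(text));
}
```

The hook body then reduces to `if (containsPII(ctx.output)) return { action: 'block', reason: 'PII detected in model output' };`.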
## Recipe 5: Development / debugging
Verbose logging for a dev environment — see everything in the console:

```typescript
import * as KnowYourAI from '@know-your-ai/node';

KnowYourAI.init({
  dsn: process.env.KNOW_YOUR_AI_DSN!,
  environment: 'development',
  debug: true,
  onCapture: (data) => {
    console.log(`\n${'═'.repeat(60)}`);
    console.log(`📡 ${data.provider}/${data.model} — ${data.operation}`);
    console.log(`   Duration: ${data.duration}ms`);
    console.log(`   Tokens: in=${data.tokenUsage?.inputTokens} out=${data.tokenUsage?.outputTokens}`);
    console.log(`   Cost: $${data.cost?.totalCost?.toFixed(6) || 'N/A'}`);
    if (data.toolCalls?.length) {
      console.log(`   Tools: ${data.toolCalls.map(t => t.name).join(', ')}`);
    }
    if (data.error) {
      console.log(`   ❌ Error: ${data.error.type} — ${data.error.message}`);
    }
    console.log(`${'═'.repeat(60)}\n`);
  },
  integrations: [KnowYourAI.googleGenAIIntegration()],
});

// Configure tracing to print to console
KnowYourAI.configureTracing({
  transport: KnowYourAI.createConsoleTraceTransport(),
});
```
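Beyond printing each call, `onCapture` can feed a dev-time aggregator so you can see per-model totals at the end of a session. A sketch assuming only the field names the logger above already reads (`model`, `duration`, `tokenUsage`); the real capture payload has more fields:

```typescript
// Simplified subset of a capture, matching the fields used above.
interface CaptureSummary {
  model: string;
  duration: number;
  tokenUsage?: { inputTokens: number; outputTokens: number };
}

// Aggregate captures per model: call count, total latency, total tokens.
function summarize(captures: CaptureSummary[]) {
  const byModel = new Map<string, { calls: number; totalMs: number; totalTokens: number }>();
  for (const c of captures) {
    const entry = byModel.get(c.model) ?? { calls: 0, totalMs: 0, totalTokens: 0 };
    entry.calls += 1;
    entry.totalMs += c.duration;
    entry.totalTokens += (c.tokenUsage?.inputTokens ?? 0) + (c.tokenUsage?.outputTokens ?? 0);
    byModel.set(c.model, entry);
  }
  return byModel;
}
```

In `onCapture`, push each `data` object into an array and call `summarize` on shutdown.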
## Recipe 6: Express.js middleware
Wrap your Express API routes with per-request tracing:

```typescript
import express from 'express';
import * as KnowYourAI from '@know-your-ai/node';
import { firewallIntegration } from '@know-your-ai/firewall';
import { GoogleGenAI } from '@google/genai';

KnowYourAI.init({
  dsn: process.env.KNOW_YOUR_AI_DSN!,
  traceMode: true,
  integrations: [
    KnowYourAI.googleGenAIIntegration(),
    firewallIntegration({
      baseUrl: process.env.FIREWALL_URL!,
      apiKey: process.env.FIREWALL_API_KEY!,
      onInputViolation: 'block',
      onOutputViolation: 'log',
    }),
  ],
});

const genAI = new GoogleGenAI({ apiKey: process.env.GOOGLE_API_KEY! });
const ai = KnowYourAI.instrumentGoogleGenAIClient(genAI);

const app = express();
app.use(express.json());

app.post('/api/chat', async (req, res) => {
  const { message, userId } = req.body;
  try {
    const reply = await KnowYourAI.withTrace(
      { name: 'api-chat', userId },
      async () => {
        const response = await ai.models.generateContent({
          model: 'gemini-2.0-flash',
          contents: message,
        });
        return response.text;
      }
    );
    res.json({ reply });
  } catch (error) {
    if (error instanceof KnowYourAI.HookBlockedError) {
      res.status(400).json({ error: 'Your message was blocked by our safety system.' });
    } else {
      res.status(500).json({ error: 'Internal server error.' });
    }
  }
});

// Flush on shutdown
process.on('SIGTERM', async () => {
  await KnowYourAI.getClient()?.flush(5000);
  process.exit(0);
});

app.listen(3000);
```
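The route above destructures `message` and `userId` from `req.body` without checking them, so a malformed request reaches the model (and the firewall) with `undefined` content. A sketch of a validator to run before `withTrace` — the 8000-character cap and error messages are arbitrary examples, not SDK requirements:

```typescript
// Validate the /api/chat request body before it reaches the model.
type ChatBody =
  | { ok: true; message: string; userId: string }
  | { ok: false; error: string };

function validateChatBody(body: unknown): ChatBody {
  const b = body as { message?: unknown; userId?: unknown } | null;
  if (typeof b?.message !== 'string' || b.message.length === 0) {
    return { ok: false, error: 'message is required' };
  }
  if (b.message.length > 8000) {
    return { ok: false, error: 'message too long' };
  }
  if (typeof b.userId !== 'string' || b.userId.length === 0) {
    return { ok: false, error: 'userId is required' };
  }
  return { ok: true, message: b.message, userId: b.userId };
}
```

In the handler: `const parsed = validateChatBody(req.body); if (!parsed.ok) return res.status(400).json({ error: parsed.error });`.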