doctor
Validate your configuration and test connectivity to the Know Your AI backend.
Checks performed:
| # | Check | Description |
|---|---|---|
| 1 | DSN env var | KNOW_YOUR_AI_DSN is set |
| 2 | DSN format | DSN parses correctly; shows host & product ID |
| 3 | API key | Key starts with kya_; shows first 8 characters |
| 4 | API connection | Authenticates against the backend (10s timeout) |
| 5 | Evaluations | Lists evaluations; reports count |
| 6 | Datasets | Lists datasets; reports count |
Each check shows ✔ (pass) or ✖ (fail). If checks 1 or 2 fail, remaining checks are skipped.
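For context, the first three (local) checks can be approximated in a few lines. This is a hypothetical sketch using Python's standard URL parsing, not the CLI's actual implementation:

```python
from urllib.parse import urlparse

def local_dsn_checks(env: dict) -> str:
    """Approximate doctor checks 1-3 (hypothetical; the real CLI may differ)."""
    dsn = env.get("KNOW_YOUR_AI_DSN")
    if not dsn:
        return "✖ 1: KNOW_YOUR_AI_DSN is not set"
    u = urlparse(dsn)
    if not (u.hostname and u.username and u.path.strip("/")):
        return "✖ 2: DSN does not parse into host and product ID"
    if not u.username.startswith("kya_"):
        return "✖ 3: API key does not start with kya_"
    # Pass: report host, product ID, and the first 8 characters of the key.
    return f"✔ host={u.hostname} product={u.path.strip('/')} key={u.username[:8]}…"

sample = {"KNOW_YOUR_AI_DSN": "https://kya_abc12345:da2-xyz@host.example/prod-1"}
print(local_dsn_checks(sample))  # ✔ host=host.example product=prod-1 key=kya_abc1…
```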
target
Display the full configuration of your linked product.
Output sections:
- Product — Name, ID, workspace, type, environment, active status, description
- Target Configuration — URL, CSS selectors (input, submit, response, waitFor), max tokens, rate limit
- API Connection — Status, API type, endpoint, model, message/response fields, connected at/by
- Metadata — Default judgment model, owners
- Timestamps — Created and updated dates
list / ls
List all evaluations and datasets for the linked product.
Output: Two formatted tables:
Evaluations:
| Column | Description |
|---|---|
| ID | Evaluation identifier |
| Name | Evaluation name (truncated to 26 chars) |
| Category | Evaluation category |
| Runs | Number of runs executed |
Datasets:
| Column | Description |
|---|---|
| ID | Dataset identifier |
| Name | Dataset name (truncated to 26 chars) |
| Category | Dataset category |
| Prompts | Number of prompts in the dataset |
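The 26-character truncation used in both tables could look like this (a sketch; the exact ellipsis convention is an assumption):

```python
def truncate(name: str, width: int = 26) -> str:
    # Names longer than the column width are clipped and end with an ellipsis.
    return name if len(name) <= width else name[: width - 1] + "…"

print(truncate("Jailbreak Resistance Test"))          # fits: printed unchanged
print(truncate("An extremely long evaluation name"))  # clipped to 26 characters
```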
evaluate / eval
Run an evaluation and monitor progress in real time.
kya evaluate <evaluation-id> [flags]
Arguments:
| Argument | Required | Description |
|---|---|---|
| `<evaluation-id>` | Yes | ID of the evaluation to run |
Flags:
| Flag | Default | Description |
|---|---|---|
| `--max-prompts <n>` | — | Maximum prompts per dataset |
| `--timeout <seconds>` | 600 | Maximum wait time in seconds (10 minutes) |
| `--debug` | false | Enable debug logging |
Example:
# Run with all prompts, 15-minute timeout
kya evaluate eval-abc-123 --timeout 900
# Run with max 5 prompts per dataset
kya evaluate eval-abc-123 --max-prompts 5
# Run with debug output
kya evaluate eval-abc-123 --debug
During execution, a live progress bar is displayed:
Running: Jailbreak Resistance Test
Judge model: gemini-2.0-flash | Threshold: 0.8
[████████░░░░░░░░] 50% — 25/50 tests — 2m 15s elapsed
On completion:
Results
───────────────
Total tests: 50
Secure: 48
Vulnerable: 2
Score: 96.0%
Duration: 4m 32s
Run ID: run-xyz-789
✔ View detailed results:
https://knowyourai.hydrox.ai/ws-123/products/prod-456/security-test/run-xyz-789
Outcome messages:
- All tests pass:
✔ All 50 tests passed! Your AI system is secure against these prompts.
- Some vulnerable:
⚠ 2 out of 50 tests found vulnerabilities.
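The live bar above can be reproduced with simple block arithmetic; a sketch, with the 16-cell width taken from the example output:

```python
def progress_bar(done: int, total: int, width: int = 16) -> str:
    # Fill cells proportionally to completed tests, pad the rest with ░.
    filled = round(width * done / total) if total else 0
    pct = round(100 * done / total) if total else 0
    return f"[{'█' * filled}{'░' * (width - filled)}] {pct}% — {done}/{total} tests"

print(progress_bar(25, 50))  # [████████░░░░░░░░] 50% — 25/50 tests
```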
describe
Show comprehensive details about an evaluation.
kya describe <evaluation-id>
Arguments:
| Argument | Required | Description |
|---|---|---|
| `<evaluation-id>` | Yes | ID of the evaluation to describe |
Output sections:
- Evaluation — Name, ID, product, description, category, type, tags
- Judgment Configuration — Judge model, threshold, judge prompt (truncated)
- Schedule — Cron expression, last/next run times (if configured)
- Advanced Settings — Timeout, concurrency, retry config, notifications (if configured)
- Linked Datasets — List of datasets with names
- Recent Runs — Last 5 runs with status, score, ID, and relative date
- Stats — Total run count, created/updated timestamps, created by
Quick action hint: Prints the evaluate command at the end:
→ Run this evaluation: know-your-ai evaluate eval-abc-123
history
Show recent evaluation run history across all evaluations.
Flags:
| Flag | Default | Description |
|---|---|---|
| `-a, --all` | false | Show all runs (default: last 10) |
Example:
# Show last 10 runs
kya history
# Show all runs
kya history --all
Output: A table sorted by most recent first:
| Column | Description |
|---|---|
| # | Row number |
| Run ID | Run identifier |
| Evaluation | Evaluation name (truncated to 20 chars) |
| Status | Status with icon (✔ completed, ✖ failed, ● running, etc.) |
| Score | Percentage with (secure/total) |
| Date | Relative time (just now, 5m ago, 2h ago, 3d ago) |
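The relative times in the Date column could be derived as follows (a sketch; the exact cutoffs are assumptions):

```python
def relative_time(seconds_ago: int) -> str:
    # Buckets matching the documented examples: just now, Nm ago, Nh ago, Nd ago.
    if seconds_ago < 60:
        return "just now"
    if seconds_ago < 3600:
        return f"{seconds_ago // 60}m ago"
    if seconds_ago < 86400:
        return f"{seconds_ago // 3600}h ago"
    return f"{seconds_ago // 86400}d ago"

print(relative_time(300))     # 5m ago
print(relative_time(7200))    # 2h ago
print(relative_time(259200))  # 3d ago
```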
Status icons:
| Status | Display |
|---|---|
| completed | ✔ completed (green) |
| failed | ✖ failed (red) |
| cancelled | ⊘ cancelled (yellow) |
| timeout | ⏱ timeout (red) |
| running / task_running / container_running | ● running (yellow) |
| pending / queued | ◌ pending (dim) |
Score coloring:
- ≥ 80%: green
- ≥ 50%: yellow
- < 50%: red
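The coloring thresholds translate directly into a small helper (a sketch; the color names stand in for the CLI's actual terminal styling):

```python
def score_color(score: float) -> str:
    # Thresholds from the score-coloring rules above.
    if score >= 80:
        return "green"
    if score >= 50:
        return "yellow"
    return "red"

print(score_color(96.0))  # green
print(score_color(64.0))  # yellow
print(score_color(12.0))  # red
```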
result
Show detailed results for a specific evaluation run.
kya result <run-id>
Arguments:
| Argument | Required | Description |
|---|---|---|
| `<run-id>` | Yes | ID of the run to inspect |
Output sections:
- Run Overview — Run ID, evaluation name, status, run type, target model, judge model
- Results — Total tests, completed, secure (green), vulnerable (red), score with visual bar:
Score: 96.0% [██████████████████████████████████████████████░░]
- Selected Attacks — Attack IDs (if present)
- Compliance Results — JSON data (if present)
- Behavior Analysis — JSON data (if present)
- Timing — Started, ended, duration (auto-formatted), created
- Dashboard link — Full URL to the run in the web dashboard
- Failure reason — Shown if the run failed
help
kya help
kya --help
kya -h
Displays all available commands, their syntax, environment variables, and usage examples.
version
kya version
kya --version
kya -v
Prints the CLI version: @know-your-ai/cli v0.1.7
Error handling
All commands follow the same error handling pattern:
- Missing DSN — Prints guidance on setting KNOW_YOUR_AI_DSN and exits with code 1
- Invalid DSN — Prints format error and exits with code 1
- Auth errors — Shows “unauthorized” or “forbidden” with specific messaging
- Network errors — Shows the raw error message
- Non-critical failures (e.g., resolving product name) — Falls back to IDs silently
# Example: Missing DSN
$ kya list
✖ KNOW_YOUR_AI_DSN environment variable is not set.
Set it with your DSN from the Know Your AI dashboard:
export KNOW_YOUR_AI_DSN="https://kya_xxx:da2-xxx@host/product-id"
Get your DSN from: Settings → API Keys
Environment variables
| Variable | Required | Description |
|---|---|---|
| KNOW_YOUR_AI_DSN | Yes | DSN from Know Your AI dashboard (Settings → API Keys) |
DSN format:
https://kya_<api-key>:da2-<amplify-key>@<host>/<product-id>
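Because the DSN is a standard URL, it can be decomposed with ordinary URL parsing. A sketch (the field names are assumptions):

```python
from urllib.parse import urlparse

def parse_dsn(dsn: str) -> dict:
    # Split the DSN into its documented components.
    u = urlparse(dsn)
    return {
        "api_key": u.username,        # kya_<api-key>
        "amplify_key": u.password,    # da2-<amplify-key>
        "host": u.hostname,
        "product_id": u.path.lstrip("/"),
    }

print(parse_dsn("https://kya_abc:da2-def@host.example.com/prod-123"))
```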