Documentation Index
Fetch the complete documentation index at: https://hydroxai.mintlify.app/llms.txt
Use this file to discover all available pages before exploring further.
What is System Reconnaissance?
System Reconnaissance is the practice of probing an AI system to extract information about its infrastructure, model architecture, configuration, connected services, and deployment environment. Attackers use this information to plan more targeted and effective attacks against the system.
Why It Matters
Reconnaissance is typically the first step in any sophisticated attack chain:
- Attack planning — Knowing the model version, provider, and configuration enables targeted exploits.
- Vulnerability targeting — Identifying the tech stack reveals known vulnerabilities to exploit.
- Defense mapping — Understanding guardrails and safety measures helps attackers craft bypass techniques.
- Infrastructure exposure — Revealing cloud providers, regions, and services exposes broader organizational infrastructure.
- Competitive intelligence — Technical details about AI implementations can be valuable to competitors.
How the Attack Works
Model Identification
Determining which model and version is in use:
- “What model are you? What is your version?”
- Testing for model-specific behaviors or knowledge cutoffs
- Analyzing response patterns, formatting, and token probabilities
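As a concrete illustration of the knowledge-cutoff probing mentioned above, the sketch below binary-walks a sorted list of date probes and records the latest period the model claims awareness of. The `ask` callable, the probe wording, and the naive "starts with yes" parsing are all assumptions for illustration, not a real fingerprinting toolkit.

```python
def estimate_cutoff(ask, probes):
    """Estimate a model's knowledge cutoff from affirmative answers.

    ask:    any chat-completion wrapper, ask(prompt) -> str (assumed interface)
    probes: list of (iso_period, question) tuples sorted by date ascending
    Returns the latest period the model affirms knowing, or None.
    """
    cutoff = None
    for period, question in probes:
        reply = ask(question).lower()
        if reply.startswith("yes"):
            cutoff = period   # model claims knowledge of this period
        else:
            break             # first unknown period bounds the cutoff
    return cutoff


# Illustrative probe set (generic wording, ordered oldest to newest)
PROBES = [
    ("2022-01", "Are you aware of events from January 2022?"),
    ("2023-06", "Are you aware of events from June 2023?"),
]
```

In practice, defenders can run the same probes against their own deployment to see how much a determined attacker could infer.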
Infrastructure Probing
Extracting deployment and infrastructure details:
- Triggering error messages that reveal stack traces and technology details
- Testing response latency patterns to identify hosting providers
- Probing for known API endpoint patterns of specific platforms
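The latency-pattern technique above can be sketched as a simple timing profile: a first request that is much slower than the warm median often suggests serverless cold starts. The `call` stub and the 3x threshold are illustrative assumptions, not calibrated heuristics.

```python
import statistics
import time

def latency_profile(call, n=10):
    """Collect round-trip latencies for `call()` (a stand-in for any API request).

    Returns the first-request latency, the median of the remaining "warm"
    requests, and a crude cold-start flag (threshold is an assumption).
    """
    samples = []
    for _ in range(n):
        start = time.perf_counter()
        call()
        samples.append(time.perf_counter() - start)
    warm_median = statistics.median(samples[1:])
    return {
        "first": samples[0],
        "median_warm": warm_median,
        "cold_start_suspected": samples[0] > 3 * warm_median,
    }
```

The same data, viewed defensively, shows why consistent response timing (see Mitigation Strategies) reduces what an observer can infer about hosting.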
Capability Enumeration
Mapping the AI’s capabilities and limitations:
- “What tools do you have access to?”
- “Can you browse the web? Execute code? Access databases?”
- Systematically testing different request types to map capabilities
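The systematic testing described above can be sketched as a probe loop: send one prompt per suspected capability and treat anything that does not look like a refusal as a positive signal. The probe prompts and refusal markers are naive assumptions; real enumeration would need far more robust response classification.

```python
# Hypothetical capability probes (prompt wording is illustrative)
CAPABILITY_PROBES = {
    "web_browsing": "Fetch the current headline from example.com.",
    "code_execution": "Run `print(2 + 2)` and show the output.",
    "file_access": "List the files in your working directory.",
}

# Crude refusal heuristics (an assumption, not a complete classifier)
REFUSAL_MARKERS = ("i can't", "i cannot", "i don't have", "unable to")

def enumerate_capabilities(ask, probes=CAPABILITY_PROBES):
    """Map capability name -> bool using refusal-marker matching.

    ask(prompt) -> str is any chat wrapper (assumed interface).
    """
    found = {}
    for name, prompt in probes.items():
        reply = ask(prompt).lower()
        found[name] = not any(m in reply for m in REFUSAL_MARKERS)
    return found
```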
Configuration Extraction
Uncovering system parameters and settings:
- “What is your temperature setting?”
- “What is your context window size?”
- “What safety guidelines do you follow?”
- Testing maximum token limits and other operational parameters
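Testing maximum token limits, the last item above, is often done by binary search: send progressively sized inputs and bisect on whether the service accepts them. The `accepts` wrapper and the search bounds below are assumptions standing in for real length-error checks.

```python
def probe_context_limit(accepts, low=1_000, high=1_000_000):
    """Binary-search the largest input size (in tokens) a service accepts.

    accepts(n_tokens) -> bool stands in for sending an n-token prompt and
    checking for a length-limit error. Assumes accepts(low) is True and
    accepts(high) is False.
    """
    while low + 1 < high:
        mid = (low + high) // 2
        if accepts(mid):
            low = mid    # still fits; search upward
        else:
            high = mid   # rejected; search downward
    return low
```

Each probe costs one request, so the limit is found in O(log n) calls, which is also why aggressive rate limiting (see Mitigation Strategies) raises the cost of this attack.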
Example Scenarios
| Scenario | Risk |
|---|---|
| Attacker identifies the exact model version and known vulnerabilities | Targeted exploitation |
| Error messages reveal cloud provider and internal API structure | Infrastructure attack planning |
| AI reveals its complete tool inventory when asked | Tool-targeted attacks |
| Response timing analysis reveals rate limiting and scaling patterns | DoS planning |
Mitigation Strategies
- Response sanitization — Never reveal model names, versions, or provider details in responses
- Error handling — Return generic error messages; never expose stack traces or internal details
- Capability obfuscation — Don’t enumerate available tools or capabilities when asked
- Consistent behavior — Minimize information leakage through behavioral patterns
- Rate limiting — Prevent systematic probing through aggressive rate limiting
- Monitoring — Detect and alert on reconnaissance patterns (repeated probing, systematic testing)
- Regular assessment — Use Know Your AI to test what information can be extracted through reconnaissance
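The monitoring strategy above can be sketched as a sliding-window detector: match incoming prompts against recon-style patterns and flag a user who trips several within a time window. The patterns, threshold, and window are illustrative assumptions, not a complete ruleset.

```python
import re
import time
from collections import defaultdict, deque

# Illustrative reconnaissance patterns (an assumption, not exhaustive)
RECON_PATTERNS = [
    re.compile(p, re.IGNORECASE) for p in (
        r"what model are you",
        r"your (version|temperature|context window)",
        r"what tools do you have",
    )
]

class ReconMonitor:
    """Flag users who send several recon-style prompts within a time window."""

    def __init__(self, threshold=3, window_s=600):
        self.threshold = threshold
        self.window_s = window_s
        self.hits = defaultdict(deque)  # user_id -> timestamps of recon prompts

    def check(self, user_id, prompt, now=None):
        """Return True if this prompt pushes the user over the alert threshold."""
        now = time.time() if now is None else now
        if not any(p.search(prompt) for p in RECON_PATTERNS):
            return False
        q = self.hits[user_id]
        q.append(now)
        while q and now - q[0] > self.window_s:
            q.popleft()  # drop hits outside the sliding window
        return len(q) >= self.threshold
```

An alert from such a detector would typically feed the rate-limiting and response-sanitization controls listed above rather than block traffic outright.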