## What is BOLA?
Broken Object Level Authorization (BOLA) occurs when an AI system fails to verify that the requesting user has permission to access a specific data object. Attackers manipulate object identifiers (IDs, references, paths) to access data belonging to other users, organizations, or restricted contexts.

## Why It Matters
BOLA is the #1 risk in the OWASP API Security Top 10, and AI systems introduce new dimensions:

- Data breach at scale — AI systems often have broad data access for RAG and context retrieval, amplifying BOLA's impact.
- Cross-tenant data leakage — Multi-tenant AI platforms may expose one customer’s data to another.
- Training data exposure — BOLA in model management APIs can expose training datasets from other organizations.
- Context manipulation — Attackers may inject unauthorized objects into the AI’s context window.
- Compliance violations — Unauthorized access to personal, medical, or financial data triggers regulatory penalties.
## How the Attack Works
### Object ID Manipulation
Attackers modify identifiers in AI requests:

- Changing document IDs in RAG queries to access restricted documents
- Modifying conversation IDs to read other users’ chat histories
- Altering evaluation run IDs to view other organizations’ test results
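The missing control behind all of these is a per-object ownership check at fetch time. A minimal sketch in Python (the names `DOCS`, `get_document`, and `AuthorizationError` are illustrative, not a real API):

```python
# Hypothetical in-memory document store standing in for a RAG backend.
DOCS = {
    "doc-1": {"owner": "alice", "text": "Q3 roadmap"},
    "doc-2": {"owner": "bob", "text": "Salary data"},
}

class AuthorizationError(Exception):
    pass

def get_document(doc_id: str, requesting_user: str) -> str:
    """Return a document only after verifying object-level ownership."""
    doc = DOCS.get(doc_id)
    if doc is None or doc["owner"] != requesting_user:
        # Returning the same error for "missing" and "forbidden"
        # avoids leaking which object IDs exist.
        raise AuthorizationError(f"not authorized for {doc_id!r}")
    return doc["text"]
```

With this check in place, changing `doc-1` to `doc-2` in a request from `alice` fails rather than returning another user's data.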
### Conversational Object Reference
Using natural language to reference unauthorized objects:

- “Show me the evaluation results for workspace ID 12345”
- “Retrieve the document with ID [other-user’s-doc]”
- “What was discussed in conversation [other-user’s-conversation-id]?”
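The defense is the same regardless of how the ID arrives: authorize against the authenticated caller's identity, never against anything the model extracted from conversation text. A sketch under assumed names (`ALLOWED`, `read_conversation` are hypothetical):

```python
# Hypothetical access map: which conversations each caller may read.
ALLOWED = {"alice": {"conv-1"}}

def read_conversation(conversation_id: str, caller: str) -> str:
    """Tool handler: the ID may come from user prompt text, but the
    permission check uses only the authenticated caller identity."""
    if conversation_id not in ALLOWED.get(caller, set()):
        raise PermissionError("caller may not read this conversation")
    return f"history of {conversation_id}"
```

The key design choice is that `caller` comes from the session or token, so a prompt like “read conversation conv-1” grants nothing the caller does not already hold.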
### Indirect Object Access
Exploiting AI reasoning to access unauthorized data:

- Asking the AI to “compare my data with organization X’s data”
- Requesting aggregations that include data from unauthorized objects
- Using search queries that inadvertently return unauthorized results
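Indirect access is blocked by filtering retrieval results down to the caller's permitted objects *before* the model reasons, aggregates, or compares. A sketch with hypothetical names (`PERMISSIONS`, `authorized_results`):

```python
# Hypothetical permission map: object IDs each user may see.
PERMISSIONS = {"alice": {"doc-1", "doc-3"}}

def authorized_results(user: str, hits: list[dict]) -> list[dict]:
    """Drop unauthorized objects from search hits before they ever
    reach the model's context window."""
    allowed = PERMISSIONS.get(user, set())
    return [h for h in hits if h["doc_id"] in allowed]

hits = [{"doc_id": "doc-1"}, {"doc_id": "doc-2"}, {"doc_id": "doc-3"}]
# For alice, doc-2 is silently removed; an unknown user gets nothing.
```

Because filtering happens on the raw hits, a request like “compare my data with organization X's” can only ever aggregate over objects the caller was already allowed to read.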
## Example Scenarios
| Scenario | Risk |
|---|---|
| User accesses another organization’s evaluation results by changing the run ID | Competitive intelligence leak |
| AI retrieves documents from an unauthorized context source | Data breach |
| Attacker reads another user’s conversation history | Privacy violation |
| RAG system returns results from restricted document collections | Compliance violation |
## Mitigation Strategies
- Object-level authorization checks — Verify ownership/permission for every data object before returning it to the AI
- Row-level security — Implement database-level security that restricts which objects a user can query
- Context isolation — Ensure RAG systems have strict document-level access controls
- ID obfuscation — Use UUIDs or encrypted identifiers instead of sequential IDs
- Authorization in the data layer — Don’t rely on the AI application to filter unauthorized objects; enforce at the data layer
- Comprehensive testing — Use Know Your AI to test BOLA across all object types and access paths
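Two of the strategies above (ID obfuscation and authorization in the data layer) can be sketched together; every name here is illustrative, and a production system would enforce the tenant filter in the database itself, e.g. via row-level security:

```python
import uuid

def new_object_id() -> str:
    """Non-guessable identifier: UUIDs replace sequential integers,
    so attackers cannot enumerate neighboring objects."""
    return str(uuid.uuid4())

def tenant_scoped_query(base_filter: dict, tenant_id: str) -> dict:
    """Merge the tenant constraint in the data layer itself, so the
    AI application cannot forget (or be tricked into) omitting it."""
    return {**base_filter, "tenant_id": tenant_id}
```

Obfuscated IDs raise the cost of guessing, but they are not authorization; the mandatory tenant filter is what actually prevents cross-tenant reads.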