
What is Agent Identity & Trust Abuse?

Agent Identity & Trust Abuse targets the trust relationships that AI agents have with systems, users, and other agents. Attackers exploit the fact that agents often operate with trusted identities — API keys, service accounts, OAuth tokens — and that other systems trust requests coming from these agents. By compromising or impersonating an agent’s identity, attackers inherit the agent’s access and trust.

Why It Matters

Trust is the foundation of agentic AI systems, making identity abuse highly impactful:
  • Inherited trust — Agents often have broad access because they need to serve multiple users and tasks.
  • Identity confusion — It can be unclear whether an action was initiated by the user, the agent, or an attacker acting through the agent.
  • Service account exploitation — Agent service accounts often have elevated or over-provisioned permissions.
  • Cross-system trust — Agents trusted by multiple systems provide lateral movement paths for attackers.
  • Attribution challenges — When agents act on behalf of users, audit trails become complex and exploitation becomes harder to trace.

How the Attack Works

Agent Impersonation

Pretending to be the AI agent to other systems:
  • Stealing or forging the agent’s API keys and tokens
  • Replaying captured agent requests to bypass authentication
  • Creating a fake agent that presents the same identity
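One common defense against stolen credentials and request replay is to require every agent request to carry a keyed signature bound to a single-use nonce and a timestamp. The sketch below is illustrative, not a prescribed implementation; the secret, window size, and nonce store are all assumptions (a production system would use a KMS-held key and a TTL cache rather than an in-memory set):

```python
import hashlib
import hmac
import time

# Hypothetical shared secret between the agent and the receiving service.
AGENT_SECRET = b"example-secret-do-not-hardcode"

SEEN_NONCES = set()      # in practice: a TTL cache such as Redis
MAX_SKEW_SECONDS = 30    # reject requests older than this window

def sign_request(body: bytes, nonce: str, timestamp: int) -> str:
    """Agent side: bind the payload, nonce, and timestamp into one MAC."""
    msg = b"|".join([body, nonce.encode(), str(timestamp).encode()])
    return hmac.new(AGENT_SECRET, msg, hashlib.sha256).hexdigest()

def verify_request(body: bytes, nonce: str, timestamp: int, signature: str) -> bool:
    """Service side: a captured request cannot be replayed, because its
    nonce is single-use and its timestamp expires."""
    if abs(time.time() - timestamp) > MAX_SKEW_SECONDS:
        return False  # stale request: likely a replay
    if nonce in SEEN_NONCES:
        return False  # nonce reuse: definitely a replay
    expected = sign_request(body, nonce, timestamp)
    if not hmac.compare_digest(expected, signature):
        return False  # forged or tampered request
    SEEN_NONCES.add(nonce)
    return True
```

With this scheme, a captured request fails verification on replay even though the attacker holds a valid signature, because the nonce has already been consumed.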

Trust Relationship Exploitation

Abusing the trust other systems place in the agent:
  • Using the agent to access systems that trust requests from its identity
  • Exploiting the agent’s OAuth scopes to access resources beyond the current task
  • Leveraging the agent’s network position within trusted zones
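A counter to scope exploitation is to check tokens against the scopes the *current task* needs, not against the token's full grant. The task names and scope strings below are hypothetical, a minimal sketch of per-task scope narrowing at a service boundary:

```python
# Hypothetical mapping from task to the minimal scopes it requires.
TASK_REQUIRED_SCOPES = {
    "summarize_inbox": {"mail.read"},
    "schedule_meeting": {"calendar.write"},
}

def authorize(task: str, token_scopes: set[str]) -> set[str]:
    """Return only the scopes this task may use; raise if the token
    cannot cover the task at all."""
    required = TASK_REQUIRED_SCOPES.get(task)
    if required is None:
        raise PermissionError(f"unknown task: {task}")
    if not required <= token_scopes:
        raise PermissionError(f"token lacks scopes for {task}")
    # Even if the token carries broad scopes (e.g. mail.write, files.read),
    # the task proceeds with only the minimal set.
    return required
```

The key design choice is that the agent's broad OAuth grant never reaches downstream code; each task sees only the scopes it was mapped to.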

Identity Confusion Attacks

Creating ambiguity about who is making a request:
  • Tricking the agent into making requests that appear user-initiated
  • Manipulating the agent into acting under a different user’s identity
  • Exploiting session management to confuse agent and user identities

Example Scenarios

  • Attacker steals the agent’s API tokens and uses them to access all connected services. Risk: full system compromise.
  • Agent is tricked into making requests under another user’s identity. Risk: unauthorized access.
  • Compromised agent’s trusted identity is used to access internal networks. Risk: lateral movement.
  • Agent’s OAuth scopes allow access to resources beyond its current task. Risk: over-privileged access.

Mitigation Strategies

  • Least privilege identity — Grant agents the minimum identity and scope needed for each specific task
  • Token rotation — Regularly rotate agent credentials and use short-lived tokens
  • Identity verification — Verify agent identity at each service boundary, not just at the entry point
  • Action attribution — Maintain clear audit trails distinguishing user-initiated from agent-initiated actions
  • Mutual authentication — Require systems to authenticate agents and agents to authenticate systems
  • Scope limitation — Restrict OAuth scopes and API permissions to the minimum needed per interaction
  • Regular testing — Use Know Your AI to test agent identity and trust vulnerabilities across all trust boundaries
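Several of the mitigations above (least privilege, token rotation, scope limitation) combine naturally in short-lived, narrowly scoped tokens minted per task. A minimal sketch, assuming an HMAC-signed token format invented for illustration (a real deployment would use a standard such as signed JWTs and a KMS-held key):

```python
import base64
import hashlib
import hmac
import json
import time

SIGNING_KEY = b"example-key"  # hypothetical; hold real keys in a KMS
TOKEN_TTL = 300               # five minutes: limits the usable window

def mint_token(agent_id: str, scopes: list[str]) -> str:
    """Mint a short-lived token carrying only the scopes one task needs."""
    claims = {"sub": agent_id, "scopes": scopes,
              "exp": int(time.time()) + TOKEN_TTL}
    payload = base64.urlsafe_b64encode(json.dumps(claims).encode())
    mac = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return payload.decode() + "." + mac

def check_token(token: str) -> dict:
    """Verify signature and expiry at every service boundary,
    not just the entry point."""
    payload_b64, mac = token.rsplit(".", 1)
    expected = hmac.new(SIGNING_KEY, payload_b64.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, mac):
        raise PermissionError("bad signature")
    claims = json.loads(base64.urlsafe_b64decode(payload_b64))
    if claims["exp"] < time.time():
        raise PermissionError("token expired")
    return claims
```

Because the token expires quickly and names its scopes explicitly, a stolen credential buys the attacker a small window and a narrow blast radius rather than standing access to everything the agent can reach.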