The $21 Billion Problem Nobody's Solving: AI Agent Identity

The non-human identity market is exploding to $21B but nobody has solved AI agent identity. Here's why JIT JWTs and pharma-grade audit trails are the answer.

nhi · ai-security · identity · compliance · enterprise

Last week, Oasis raised $120M for non-human identity management. The week before, NIST published a concept paper on AI agent identity. Gartner says the NHI market will hit $21 billion in 2026.

Everyone agrees this is a massive problem. Nobody agrees on how to solve it.

I've been deep in this space for the past two months, building agent infrastructure at vaos.sh. Here's what I've learned about the fundamental identity problem facing AI agents — and the approach we're taking to solve it.

The Problem, Simply Stated

Every AI agent needs to interact with external systems. It calls APIs. It reads databases. It pushes code. It sends messages. Each of those interactions requires authentication — the agent needs to prove it's allowed to do what it's doing.

Today, most teams solve this by giving the agent a static API key or a long-lived service account token. This is the equivalent of giving a contractor your house keys, your car keys, and your bank PIN, and saying "just use whatever you need."

Static credentials for AI agents are a security nightmare for three reasons:

They never expire. A service account token created in January is still valid in December. If it leaks — through logs, error messages, a compromised dependency — the attacker has permanent access.

They're over-privileged. Most teams give the agent broad permissions because scoping credentials is tedious. The agent that only needs to read from one database table has write access to the entire cluster.

They're unauditable. When three agents share one service account, you can't tell which agent performed which action. Your audit log says "service-account-prod did X" but not why, not which agent, and not what prompt triggered it.

This is the NHI problem. And it gets worse as agents get more autonomous.

Why Traditional IAM Doesn't Work

Enterprise identity systems were built for humans and services. They assume two things:

  • The identity holder has a stable, long-lived existence (an employee, a microservice)
  • Authentication happens at well-defined boundaries (login, API gateway)

AI agents break both assumptions.

An agent might exist for 60 seconds to handle a single task, then terminate. Another agent might spawn 15 sub-agents, each needing different permissions. A third might run for weeks but need access to different systems at different times depending on the conversation.

Traditional IAM gives you a binary: authenticated or not. Agents need something more granular — authenticated for this specific action, for this specific duration, triggered by this specific context.

SAML doesn't do this. OAuth was designed for user-delegated access, not autonomous machine actions. Even SPIFFE/SPIRE, which handles machine identity well, wasn't designed for the ephemeral, context-dependent nature of AI agent work.

JIT JWTs: 60-Second Ephemeral Credentials

The approach that actually works is Just-In-Time JSON Web Tokens — credentials that are minted on demand and expire almost immediately.

Here's how it works:

  • An agent needs to call an external API
  • It requests a credential from the identity broker
  • The broker checks the agent's current context: what task is it performing, what triggered the task, what permissions does the task require
  • If everything checks out, it mints a JWT with a 60-second TTL
  • The agent uses the token, completes the action, and the token expires
  • If the token leaks, it's useless in 61 seconds

This is not a new concept in security — short-lived credentials are a best practice in cloud infrastructure. AWS STS has done this for years. But applying it to AI agents requires solving a harder problem: context-aware scoping.

A human user's permissions are relatively stable. They're an admin or they're not. An agent's required permissions change with every task. The credential system needs to understand what the agent is doing right now and issue the minimum credentials for that specific action.

This means the identity broker needs to understand agent workflows, not just agent identity. It's not enough to know "this is Agent-47." You need to know "Agent-47 is currently processing a refund for customer #1234, which requires read access to the billing table and write access to the refunds table, and nothing else."
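To make the minting step concrete, here is a minimal sketch of a broker issuing a 60-second, task-scoped HS256 JWT using only the standard library. The secret key, agent ID, and scope names are invented for illustration; a production broker would use asymmetric signing (e.g. RS256) and a proper key management system, and would derive the scope list from the agent's current task rather than accept it as an argument.

```python
import base64, hashlib, hmac, json, time

SECRET = b"broker-signing-key"  # hypothetical broker secret, for illustration only


def b64url(data: bytes) -> str:
    """Unpadded base64url, as JWTs require."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()


def mint_jwt(agent_id: str, scopes: list[str], ttl: int = 60) -> str:
    """Mint a short-lived token scoped to a single task's permissions."""
    header = {"alg": "HS256", "typ": "JWT"}
    now = int(time.time())
    claims = {
        "sub": agent_id,            # which agent (attributable)
        "scope": " ".join(scopes),  # minimum permissions for this task
        "iat": now,
        "exp": now + ttl,           # dead 60 seconds from now
    }
    signing_input = (
        f"{b64url(json.dumps(header).encode())}."
        f"{b64url(json.dumps(claims).encode())}"
    )
    sig = hmac.new(SECRET, signing_input.encode(), hashlib.sha256).digest()
    return f"{signing_input}.{b64url(sig)}"


def verify(token: str) -> dict:
    """Reject tampered or expired tokens; return the claims otherwise."""
    signing_input, _, sig = token.rpartition(".")
    expected = hmac.new(SECRET, signing_input.encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(b64url(expected), sig):
        raise ValueError("bad signature")
    payload = signing_input.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore base64 padding for decoding
    claims = json.loads(base64.urlsafe_b64decode(payload))
    if claims["exp"] < time.time():
        raise ValueError("token expired")
    return claims
```

The point of the sketch is the shape of the claims: the token names one agent, one scope set, and one short window, so a leaked token is both narrow and nearly dead on arrival.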

ALCOA+ Audit Trails: Borrowing from Pharma

The audit problem is just as critical as the credential problem. When an AI agent takes an action, you need to know:

  • What action was taken
  • When it happened
  • Who (which agent) performed it
  • Why (what triggered the action — which user request, which automated rule)
  • What context the agent had at the time

The pharmaceutical industry solved a version of this problem decades ago with ALCOA+ — a framework for data integrity in regulated environments. ALCOA stands for:

  • Attributable — every action traced to a specific actor
  • Legible — records are clear and permanent
  • Contemporaneous — recorded at the time of the action, not after
  • Original — first-hand record, not a copy or summary
  • Accurate — no edits without tracked amendments

The "+" adds Complete, Consistent, Enduring, and Available.

This framework, designed for clinical trial data, maps perfectly to AI agent audit trails. When your agent modifies a customer record, the audit entry should be attributable (which agent), contemporaneous (timestamped at execution, not logged later), original (the actual API call, not a summary), and accurate (immutable).

Most logging systems fail the "attributable" test immediately. They log "the system did X." ALCOA+ demands "Agent-47, executing task #892, triggered by user request #445, performed action X at timestamp T with credentials C."

This matters for compliance. It matters for debugging. And it matters for trust — if you can't explain what your agent did and why, you can't deploy it in any regulated industry.
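As a rough sketch of what an ALCOA+-shaped audit record could look like in code (the field names and log design here are hypothetical, not a prescribed format): each entry carries actor, task, and trigger, is timestamped at execution, and the entries are hash-chained so any after-the-fact edit is detectable, which is one way to approximate "accurate" and "enduring" in software.

```python
import dataclasses, hashlib, json, time


@dataclasses.dataclass(frozen=True)
class AuditEntry:
    agent_id: str    # Attributable: which agent
    task_id: str     # which task it was executing
    trigger: str     # why: the user request or rule that started it
    action: str      # Original: the actual call made, not a summary
    timestamp: float # Contemporaneous: recorded at execution time
    prev_hash: str   # chaining makes later edits detectable (Accurate)

    def digest(self) -> str:
        raw = json.dumps(dataclasses.asdict(self), sort_keys=True).encode()
        return hashlib.sha256(raw).hexdigest()


class AuditLog:
    """Append-only log; each entry commits to the one before it."""

    def __init__(self):
        self.entries: list[AuditEntry] = []

    def record(self, agent_id: str, task_id: str, trigger: str, action: str) -> AuditEntry:
        prev = self.entries[-1].digest() if self.entries else "genesis"
        entry = AuditEntry(agent_id, task_id, trigger, action, time.time(), prev)
        self.entries.append(entry)
        return entry

    def verify_chain(self) -> bool:
        """True if no entry has been altered or removed mid-chain."""
        return all(
            self.entries[i].prev_hash == self.entries[i - 1].digest()
            for i in range(1, len(self.entries))
        )
```

With this shape, the log can answer the "Agent-47, task #892, triggered by request #445" question directly, instead of the unattributable "the system did X."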

The NIST Concept Paper

NIST's recent concept paper on AI agent identity raises the right questions. How do you verify that an AI agent is who it claims to be? How do you scope permissions for an entity that might spawn sub-agents? How do you maintain audit trails when agents act autonomously?

The paper doesn't prescribe solutions, which is appropriate at this stage. But it signals that regulatory frameworks are coming. If you're building agent infrastructure, the time to think about identity is now — not after the compliance requirements land.

Two things stood out to me in the paper:

Agent attestation — the idea that an agent should be able to cryptographically prove its own configuration, model version, and system prompt. This is analogous to remote attestation in hardware security. If a customer asks "what version of your agent am I talking to?" there should be a verifiable answer.
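In software terms, one simple form of attestation is a keyed fingerprint over the agent's exact configuration; hardware-backed remote attestation is stronger, and everything in this sketch (the key, the model name, the fields covered) is invented for illustration.

```python
import hashlib, hmac, json

ATTESTATION_KEY = b"per-agent-signing-key"  # hypothetical; would live in an HSM or enclave


def attest(model_version: str, system_prompt: str, config: dict) -> str:
    """Produce a verifiable fingerprint of the agent's configuration.

    Any change to the model, prompt, or config yields a different value,
    so "what version of your agent am I talking to?" has a checkable answer.
    """
    material = json.dumps(
        {"model": model_version, "prompt": system_prompt, "config": config},
        sort_keys=True,  # canonical ordering so equal configs hash equally
    ).encode()
    return hmac.new(ATTESTATION_KEY, material, hashlib.sha256).hexdigest()
```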

Delegation chains — when Agent A spawns Agent B to complete a subtask, the permissions and audit trail should flow through the chain. Agent B's actions should be traceable back to Agent A's original task, and Agent A's permissions should constrain what Agent B can do. This is the "sub-agent scoping" problem, and it's genuinely hard.
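The core invariant is easy to state even though the full problem is hard: a child agent's grant is at most the intersection of what it requests and what its parent holds, and the lineage is recorded for the audit trail. A minimal sketch, with invented agent IDs and scope names:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class AgentContext:
    agent_id: str
    scopes: frozenset[str]
    chain: tuple[str, ...] = ()  # lineage back to the root task, for auditing

    def spawn(self, child_id: str, requested: frozenset[str]) -> "AgentContext":
        """Delegate to a sub-agent. Permissions only attenuate down the chain:
        the child never holds a scope its parent lacks."""
        granted = self.scopes & requested
        return AgentContext(child_id, frozenset(granted), self.chain + (self.agent_id,))
```

Each hop intersects again, so a grandchild can never escalate past any ancestor, and every action it takes carries the chain that leads back to the original task.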

Why This Isn't a Solved Problem

The $120M going to Oasis and the growing NHI market signal demand, not solutions. Most NHI platforms today focus on discovering non-human identities (finding all the service accounts and API keys scattered across your infrastructure) and managing their lifecycle (rotation, expiration, deprovisioning).

That's necessary work. But it doesn't address the AI-specific challenges:

  • Ephemeral agents that exist for seconds
  • Context-dependent permissions that change with every task
  • Autonomous decision-making that requires real-time audit
  • Sub-agent delegation with permission inheritance
  • Behavioral verification — is the agent doing what it's supposed to?

These problems require new primitives, not just better versions of existing IAM tools.

What We're Building

At vaos.sh, we're building the infrastructure layer for AI agents that includes identity as a first-class concern. Every VAOS agent gets:

  • Unique identity — not a shared service account, a per-agent cryptographic identity
  • Behavioral audit trails — every action logged with full context: what, when, who, why
  • Persistent memory with provenance — the agent remembers, and you can trace how it learned what it knows
  • Correction tracking — when an agent's behavior is modified, the change is logged and attributable

We're not claiming to have solved the full NHI problem. Nobody has. But we believe the foundation is right: ephemeral credentials, context-aware scoping, and pharma-grade audit trails applied to AI agents.

We submitted a comment on the NIST concept paper. We think the identity layer for AI agents will be as fundamental as TLS was for the web — invisible when it works, catastrophic when it doesn't.

If you're building with AI agents and thinking about identity, security, or compliance, we'd like to talk. The infrastructure is live at vaos.sh, and the identity layer is what makes everything else trustworthy.
