By clicking "Accept", you agree to the storing of cookies on your device to enhance site navigation, analyze site usage and assist in our marketing efforts. More info

The Missing Identity Layer for AI Agents (And Why OAuth and KYA Aren’t Enough)

Published January 9, 2026


Written by Agne Caunt, Product Owner at Dock Labs.

AI agents can already search for flights, build shopping lists, and compare insurance quotes. What they can't do safely is complete the purchase. Today, the only way for an agent to actually book that flight or buy that item is for the user to share their account credentials. This creates obvious security risks for users and liability problems for merchants who have no way to verify the agent's authority.

To address this, two approaches have started to emerge. One extends OAuth-style delegated access into agent workflows. The other focuses on Know Your Agent (KYA) verification frameworks that certify agent developers.

Both are valuable. But neither was designed to solve the core problem of AI agent identity.

OAuth answers the question: “Can this software access an account?”

KYA answers the question: “Who built this agent?”

But agentic commerce requires something more fundamental:

Who authorized this agent, what is it allowed to do, for how long, and who is accountable for its actions?

Without cryptographic proof of delegation, scope, and human accountability, merchants can’t safely transact with autonomous agents, and users can’t confidently delegate real purchasing power. The result is a trust gap on both sides of the transaction.

This isn’t a model capability problem. AI agents are already capable enough. It’s an identity infrastructure problem, and approaches based purely on OAuth and KYA don’t fully address it. In this article, we’ll break down where they fall short and what’s actually required to enable trusted, autonomous agent transactions.

Why current approaches fall short

OAuth-based approaches

OAuth has been the foundation of delegated access for over a decade, and the industry is actively extending it for AI agent scenarios. These extensions improve auditability and make AI agent identity more explicit in the authorization flow. But OAuth's fundamental architecture creates limitations that extensions can't fully overcome.

The core issue is that delegation constraints don't travel with the token. When a user wants to authorize an agent to "shop for insurance quotes up to €200 per month, only for car insurance, and only until Friday," those constraints live in policy configurations at identity providers—not in the credentials the agent carries. OAuth tokens grant access to resources but don't provide detailed controls over how, when, or why those resources are accessed.
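To make that concrete, here is a minimal, illustrative sketch of a decoded OAuth 2.0 access token. The issuer, audience, and scope values are hypothetical; the point is that everything the user actually constrained lives outside the token.

```typescript
// A typical decoded OAuth 2.0 access token (illustrative values only).
// The scope says what resource class the bearer may touch, but the user's
// actual constraints (up to EUR 200/month, car insurance only, until Friday)
// are nowhere in the token -- they live in policy at the identity provider.
const accessToken = {
  iss: "https://idp.example.com",        // identity provider that issued the token
  sub: "user-1234",                      // the account the agent acts on
  aud: "https://lender-a.example.com",   // only valid at this one resource server
  scope: "quotes:read purchases:create", // coarse-grained permissions
  exp: 1767312000,                       // expiry timestamp
  // No spending limit, no category restriction, no per-transaction cap:
  // a verifier at a second lender cannot reconstruct what the user authorized.
};
```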

This matters because AI agents cross organizational boundaries. An agent comparing loan offers might interact with five different lenders, each with their own identity systems. When the agent moves from one lender to another, the original delegation rules don't follow. The constraints a user carefully specified exist only at the first identity provider—they aren't cryptographically bound to the credential itself.

The portability problem compounds this. Each platform requires separate OAuth flows. An agent that works seamlessly with one lender needs entirely new authorization when approaching another. There's no standard way for the agent to carry proof of its delegated authority across different trust domains, which means users face repeated consent screens and merchants can't easily verify what the user actually authorized.

OAuth also struggles with privacy-preserving verification. When a merchant needs to verify that a user is over 25 or has income above a certain threshold, OAuth's model requires sharing the underlying data—the actual birthdate or income figure. There's no native mechanism to prove a derived claim without revealing the source information.

Agent verification solutions

A newer category of solutions focuses on verifying AI agents themselves. These "know your agent" services answer important questions: Is this agent trustworthy? Who built it? Has it been certified as safe?

This is genuinely valuable. Merchants should be able to verify that an agent comes from a reputable vendor before doing business with it. But agent verification alone doesn't solve the accountability problem.

Knowing that an agent was built by a trustworthy company doesn't tell you whether a specific human authorized this specific transaction. It doesn't prove that the agent is operating within parameters a human has set, and when something goes wrong, it doesn't create the accountability chain that merchants and regulators require.

The question isn't just "is this agent legitimate?" It's "is there a human behind this action, and can I hold them accountable?"

What trusted agent transactions actually require

For merchants to confidently transact with AI agents, they need proof of several things at once: that the agent is reputable, that a verified human authorized the agent to act, that the human meets relevant eligibility criteria, and that the agent is operating within boundaries the human actually set.

This proof needs to be portable, self-contained and verifiable. A merchant shouldn't have to call back to a dozen different services to validate an agent's authority. The credentials the agent presents should carry the complete delegation chain—from the verified human to the authorized agent, with all constraints intact.

For users, the equation is equally demanding. They need to grant their agents real authority while maintaining genuine control. This means setting specific boundaries (budget limits, time restrictions, scope limitations), having confidence those boundaries will be enforced across every merchant the agent visits, and being able to revoke access instantly when needed.

And for regulatory compliance, whether that's financial services, healthcare, or other regulated industries, there must be an audit trail linking every agent action back to a human authorization, with cryptographic proof that cannot be tampered with.

How Truvera Agent ID changes the equation

Truvera Agent ID, built on W3C verifiable credentials, offers a fundamentally different architecture. Instead of tokens that grant access while relying on external policy checks, verifiable credentials are cryptographically signed attestations that carry their meaning, and their constraints, with them.

When a user delegates authority to an AI agent through Truvera, we issue a credential that encodes exactly what the agent is permitted to do: spending limits, permitted categories, time restrictions, and scope limitations. These constraints are cryptographically bound to the credential itself. When the agent presents this credential to a merchant, all of this information travels together and can be verified independently—no callbacks required.
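As a rough illustration of the idea (the field names under credentialSubject are ours for this sketch, not Truvera's actual schema), a delegation credential along these lines might look like the following W3C-style verifiable credential:

```typescript
// Illustrative shape of a delegation credential (not Truvera's actual schema).
// The constraints sit inside the signed payload, so any verifier that checks
// the proof also sees exactly what the human authorized.
const delegationCredential = {
  "@context": ["https://www.w3.org/ns/credentials/v2"],
  type: ["VerifiableCredential", "AgentDelegationCredential"], // example type
  issuer: "did:example:issuer",           // issuer that verified the human sponsor
  validFrom: "2026-01-09T00:00:00Z",
  validUntil: "2026-01-16T00:00:00Z",     // time restriction travels with the credential
  credentialSubject: {
    id: "did:example:agent-42",           // the authorized agent
    delegatedBy: "did:example:alice",     // the verified human sponsor
    spendingLimit: { amount: 200, currency: "EUR", period: "monthly" },
    permittedCategories: ["car-insurance"], // scope limitation
  },
  proof: { /* cryptographic signature binding all of the above */ },
};
```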

The credentials prove the complete delegation chain: that the sponsoring human's identity was verified, that they specifically authorized this agent, and that the current transaction falls within the delegated parameters. Merchants verify the entire chain by checking cryptographic signatures—a simple API call that confirms authority without exposing unnecessary personal data.

Because constraints are embedded in the credentials themselves, they travel with the agent across organizational boundaries. The spending limit a user set doesn't get lost when the agent moves from one merchant to another. Merchants can add their own verification rules on top and receive cryptographic proof that those rules are satisfied before completing the transaction.
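A hypothetical merchant-side check, assuming the credential shape sketched above and that the credential's signature has already been verified, could be as simple as evaluating the embedded constraints locally:

```typescript
// Hypothetical merchant-side check (not Truvera's actual API): once the
// credential's signature is verified, the merchant evaluates the embedded
// constraints locally -- no callback to the user's identity provider.
interface DelegationConstraints {
  validUntil: string;                                  // ISO timestamp
  spendingLimit: { amount: number; currency: string };
  permittedCategories: string[];
}

interface ProposedPurchase {
  amount: number;
  currency: string;
  category: string;
}

function withinDelegatedAuthority(
  c: DelegationConstraints,
  p: ProposedPurchase,
  now: Date = new Date()
): boolean {
  const notExpired = now < new Date(c.validUntil);
  const underLimit =
    p.currency === c.spendingLimit.currency && p.amount <= c.spendingLimit.amount;
  const inScope = c.permittedCategories.includes(p.category);
  return notExpired && underLimit && inScope;
}
```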

Truvera also supports privacy-preserving verification through zero-knowledge proofs. A merchant can verify that a user is over 25 or has income above a required threshold without ever seeing the actual birthdate or income figure. The agent proves "I meet your eligibility requirement" without revealing the underlying personal data.
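Conceptually, the merchant's request shifts from "send me the attribute" to "prove a predicate about the attribute". The sketch below is illustrative only; the field names do not correspond to a specific proof-request format.

```typescript
// Illustrative predicate-based presentation request: the merchant asks for a
// proof *about* the data, not the data itself.
const presentationRequest = {
  requestedPredicates: [
    {
      name: "age_over_25",
      attribute: "birthDate",
      predicate: ">=",   // prove age >= 25 without revealing the birthdate
      value: 25,
    },
    {
      name: "income_threshold",
      attribute: "monthlyIncome",
      predicate: ">=",
      value: 3000,       // prove income >= 3000 without revealing the figure
    },
  ],
  // The agent responds with a zero-knowledge proof over its signed credential;
  // the merchant learns only true/false for each predicate.
};
```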

Revocation is immediate. When a user withdraws their agent's authority, the credential becomes invalid instantly: no propagation delays, no orphaned permissions.
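One common way to make revocation checks fast is a published status list the verifier can poll. The sketch below is loosely modeled on that pattern and is not Truvera's actual mechanism; the encoding and URL handling are assumptions for illustration.

```typescript
// Conceptual status-list revocation check: the credential points at a
// published list, and the verifier checks a single bit at the credential's
// index. Exact encoding depends on the status method in use.
interface CredentialStatus {
  statusListCredential: string; // URL of the signed status list
  statusListIndex: number;      // this credential's position in the list
}

async function isRevoked(status: CredentialStatus): Promise<boolean> {
  // Fetch the status list; in practice its own signature must be verified too.
  const res = await fetch(status.statusListCredential);
  const bits = new Uint8Array(await res.arrayBuffer()); // assume a raw bitstring for this sketch
  const byte = bits[Math.floor(status.statusListIndex / 8)];
  const bit = (byte >> (status.statusListIndex % 8)) & 1;
  return bit === 1; // 1 = revoked in this sketch
}
```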

Building for interoperability

The agentic economy will run on standards, not proprietary protocols. This is why Truvera builds on W3C verifiable credentials and supports emerging standards like Google's AP2 protocol for agent-to-merchant transactions. As agents become the primary interface for commerce, the infrastructure connecting them must be open and interoperable.

For merchants, this means integrating once and accepting credentials from any compliant issuer. For AI agent platforms, it means their agents can carry verified authority across any merchant that accepts the standard. For users, it means their delegation travels with their agent regardless of which services they interact with.

The trust gap won't close itself

AI agents are already capable of completing complex transactions. What's missing is the trust infrastructure that lets them do so safely. OAuth-based approaches and agent verification services each address part of the problem, but neither provides the complete, portable delegation chain that merchants and regulators require.

Truvera Agent ID is purpose-built to carry trusted attestations across organizational boundaries and offers the foundation for genuine agent autonomy. By encoding human authorization, delegation constraints, and verification proofs into portable credentials, we can finally give AI agents the identity infrastructure they need to transact on our behalf.

To see how Truvera enables trusted AI agent transactions, contact us for a demo.
