Introduction
I recently spoke with an ID verification company facing a new kind of problem.
They can verify a user with a high level of assurance. Documents checked, biometrics matched, fraud signals analyzed. The result is solid. Now they’re wondering how an AI agent can use that result. They can confirm the user is verified, but the agent can’t carry that verification anywhere else.
That’s where things break.
Today, identity verification is treated as a one-time event. The result lives in a database, accessible through APIs, tied to a specific application. It works well within that system, but it doesn’t travel beyond it.
And AI agents don’t operate inside a single system.
They act across platforms. They initiate actions on behalf of users. They need to prove who they represent, what has been verified about that user, and whether they’re authorized to act.
Right now, they can’t.
Not because the verification isn’t happening, but because the result isn’t portable, provable, or usable outside the environment it was created in.
That’s the gap.
And as AI agents become part of real user journeys—onboarding, transactions, support flows—that gap starts to matter very quickly.
This is the gap ID verification companies need to solve next.
Why AI Agents Can’t Use ID Verification Results Today
On paper, it looks like everything is already in place. ID verification companies can verify users with high assurance, expose results via APIs, and integrate with downstream systems.
But when you introduce AI agents into the equation, the limitations of that model become clear.
ID verification is a one-time event
Most identity verification flows are designed to answer a single question at a single point in time: Is this person who they claim to be?
Once that check is complete, the result is stored and referenced internally. It’s not designed to be reused, shared, or independently verified elsewhere. That works in traditional flows, but it breaks down when identity needs to move across systems, platforms, or actors like AI agents.
This is one of the core blockers to KYC data reuse.
Results are stored in internal systems
Verification results typically live inside the IDV’s infrastructure or the client’s database. Access is controlled through APIs, tied to specific integrations and environments.
In other words, identity is system-bound.
An AI agent operating outside that system has no native way to access or use that data unless there’s a direct integration in place. And even then, it’s just receiving data, not proof.
There’s no standard way to share or prove results externally
Even when data can be shared, there’s no universal way to make it trustworthy across different parties.
If an agent tells a third-party service, “this user has been verified,” that service has no easy way to independently verify:
- Who performed the verification
- What checks were completed
- Whether the data has been tampered with
There’s no standard, portable format for AI agent identity verification that works across organizations.
There’s no mechanism for delegation
AI agents don’t just need identity, they need authority.
They need to act on behalf of a user. That requires a way to prove:
- Who the user is
- That the user has been verified
- That the agent is authorized to act for them
Today’s systems don’t support this. There’s no built-in way to bind a verified identity to an agent and define what that agent is allowed to do.
No cryptographic proof layer
At the core of the problem is the lack of a cryptographic proof layer.
Most systems rely on:
- Database lookups
- API responses
- Implicit trust between integrated systems
But AI agents operate in more dynamic, multi-party environments where that level of trust doesn’t exist by default.
Without cryptographic proof, any claim about identity is just that, a claim.
APIs move data, not trust
APIs are often seen as the solution. But they only solve for access, not trust.
An API can return verification data, but it can’t guarantee:
- That the data hasn’t been altered
- That it came from a trusted issuer
- That it’s being used in the right context
This is the core of identity verification API limitations.
APIs move data. They don’t move trust.
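The difference is easy to see in code. The sketch below is illustrative: it uses Python’s standard-library HMAC as a stand-in for the asymmetric signatures real credential systems use (so that verifiers need only an issuer’s public key, not a shared secret), and the key and field names are hypothetical.

```python
import hashlib
import hmac
import json

# A plain API-style response: nothing in the data itself reveals tampering.
api_response = {"user_verified": True, "assurance": "high"}
tampered = dict(api_response, assurance="low")  # altered in transit; looks just as valid

# A signed payload carries its own integrity proof.
ISSUER_KEY = b"issuer-secret-key"  # hypothetical key material

def sign(payload: dict) -> str:
    canonical = json.dumps(payload, sort_keys=True).encode()
    return hmac.new(ISSUER_KEY, canonical, hashlib.sha256).hexdigest()

def verify(payload: dict, signature: str) -> bool:
    return hmac.compare_digest(sign(payload), signature)

sig = sign(api_response)
assert verify(api_response, sig)   # intact data verifies
assert not verify(tampered, sig)   # any alteration breaks the proof
```

The unsigned response can be modified anywhere along the way with no trace; the signed one fails verification the moment a single field changes.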
The result: identity that can’t travel
When you put it all together, the issue becomes clear:
- Identity is tied to a specific system
- Verification results aren’t portable
- There’s no standard way to prove them
- And no way for agents to use them with authority
So even though the verification has already been done, it can’t be reused where it matters most.
Especially not by AI agents operating across systems.
The Requirement: Portable, Verifiable Identity
The issue isn’t that AI agents lack access to data. It’s that the data they can access isn’t enough to establish trust.
If an agent is going to act on behalf of a user, it doesn’t just need to retrieve identity information. It needs to prove it.
That means three things:
- Proof of identity: who the user is
- Proof of verification: that the user has been verified, by whom, and to what level of assurance
- Proof of authority: that the agent is allowed to act on the user’s behalf, and within what limits
Today’s systems provide partial solutions, like tokens and assertions, but they aren’t designed for portable, reusable identity across systems and agents.
And that’s the shift that needs to happen.
Identity needs to move from being a record in a database to something that can be presented and verified anywhere.
Something that:
- Travels with the user (or their agent)
- Can be verified independently, without relying on direct integration with the issuing system
- Carries its own proof of authenticity and integrity
Because in an agent-driven world, identity isn’t consumed in one place. It’s presented across many.
And wherever it’s presented, it needs to be trusted, without relying on a fragile chain of integrations or assumptions.
What AI Agents Actually Need to Act on Behalf of Users
If AI agents are going to take real actions on behalf of users—opening accounts, initiating transactions, accessing services—they need trusted, verifiable identity signals that can be presented anywhere and independently validated.
In practice, that comes down to three core requirements:
1. Verified Identity
At the most basic level, an agent needs to answer a simple question:
Who is the user?
Not based on a username or an email, but based on a previously verified identity.
This is where ID verification companies already play a critical role. They establish that a real person exists and confirm key attributes about them.
But for agents, that identity needs to be:
- Bound to the user (e.g. via biometrics or secure wallet access)
- Consistent across systems
- Portable beyond the original verification flow
Without this, an agent is just making claims with no reliable way to back them up.
2. Verifiable Proof
Knowing who the user is isn’t enough. The agent also needs to prove:
- Who verified the user
- What checks were performed
- What level of assurance was achieved
This is where most current approaches fall short.
APIs can return data like “user_verified = true”, but that doesn’t mean anything to a third party unless they already trust the source and the integration.
What’s needed instead is a verifiable proof, something that:
- Is cryptographically signed by the issuer (e.g. the IDV)
- Can be validated independently by any relying party
- Has not been tampered with
This is exactly what verifiable credentials enable.
With Dock Labs’ approach, ID verification results can be packaged into W3C-compliant verifiable credentials, allowing any verifier to check:
- The issuer
- The integrity of the data
- The validity of the credential
Without requiring a direct integration with the IDV.
3. Delegated Authority
This is the piece most systems don’t handle well.
Even if an agent can prove who the user is and that they’ve been verified, there’s still a critical question:
Is the agent allowed to act on their behalf?
And if so:
- What actions are permitted?
- Under what conditions?
- For how long?
This is the concept of delegation.
In an agent-driven model, the user needs to explicitly grant permission for an agent to act, and that permission needs to be:
- Verifiable
- Scoped (e.g. specific actions only)
- Revocable
Dock Labs addresses this by enabling credential and wallet-based flows that support secure delegation:
- The user holds their credentials in a wallet (mobile or web)
- The user can authorize an agent to act on their behalf, with that authorization enforced through wallet-based flows and verifiable proofs
- When acting, the agent can present:
- The identity credential
- Verifiable proof of authorization
This creates a clear, verifiable link between:
- The user
- The verification that was performed
- And the agent acting on their behalf
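A delegation credential can be sketched as data plus checks. This is a simplified illustration, not a real delegation protocol: the DIDs and field names are hypothetical, and HMAC stands in for the user’s asymmetric signature.

```python
import hashlib
import hmac
import json
import time

USER_KEY = b"user-wallet-key"  # hypothetical; stands in for the user's signing key

def sign(key: bytes, payload: dict) -> str:
    canonical = json.dumps(payload, sort_keys=True).encode()
    return hmac.new(key, canonical, hashlib.sha256).hexdigest()

# The user explicitly authorizes one agent, for specific actions, until a fixed expiry.
delegation = {
    "type": "DelegationCredential",
    "user": "did:example:alice",        # hypothetical identifiers
    "agent": "did:example:agent-42",
    "allowed_actions": ["open_account", "check_status"],
    "expires_at": time.time() + 3600,   # one hour
}
delegation_sig = sign(USER_KEY, delegation)

def agent_may(cred: dict, sig: str, agent: str, action: str, now: float) -> bool:
    """Verifiable, scoped, and time-limited: all checks must pass."""
    return (
        hmac.compare_digest(sign(USER_KEY, cred), sig)  # verifiable: the user really granted it
        and cred["agent"] == agent                      # bound to this specific agent
        and action in cred["allowed_actions"]           # scoped to the listed actions
        and now < cred["expires_at"]                    # expires on its own
    )

now = time.time()
assert agent_may(delegation, delegation_sig, "did:example:agent-42", "open_account", now)
assert not agent_may(delegation, delegation_sig, "did:example:agent-42", "transfer_funds", now)
assert not agent_may(delegation, delegation_sig, "did:example:other", "open_account", now)
```

Note that the agent cannot widen its own scope: editing `allowed_actions` invalidates the user’s signature.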
Bringing it together
For an AI agent to be trusted, it needs to present a complete picture:
- Who the user is
- Who verified them and how
- Why this agent is allowed to act
Without these three elements, every interaction becomes a trust gap.
With them, identity becomes something an agent can carry, present, and prove, securely and consistently across systems.
Why APIs Alone Don’t Solve This
When this problem comes up, the default answer is often: “We’ll just expose it via an API.”
And to be fair, APIs are essential. They make identity data accessible, connect systems, and power most modern identity architectures.
But they don’t solve the problem AI agents introduce.
APIs return data, not trust
An API can tell you:
- This user passed KYC
- These attributes were verified
- This is their current status
But it can’t provide portable, independently verifiable proof.
The relying party still has to trust the system behind the API, the integration, and the fact that the data hasn’t been altered along the way.
That works in tightly controlled environments. It doesn’t work when identity needs to move across independent systems, organizations, or agents.
APIs move data. They don’t move trust.
They require direct integrations and pre-established trust
APIs work best when:
- Two parties are directly integrated
- There’s an existing trust relationship
- Contracts, keys, and access controls are in place
This creates a web of point-to-point integrations.
If an AI agent needs to interact with multiple services on behalf of a user, each one would need:
- A separate integration with the IDV
- A separate trust agreement
- A way to interpret and validate the data
This doesn’t scale, especially in open or dynamic ecosystems.
They don’t create portability
API-based identity is inherently system-bound.
The verification result:
- Lives in one system
- Is accessed through that system
- Cannot be easily reused elsewhere without re-integration
So even if a user has already been verified, every new system the agent interacts with has to:
- Re-check
- Re-integrate
- Or establish a new trust relationship with the original system
That’s why KYC data reuse remains so difficult in practice.
They break in multi-party and agent ecosystems
AI agents don’t operate in a single, controlled environment.
They:
- Move across platforms
- Interact with multiple independent services
- Act on behalf of users in contexts the original system wasn’t designed for
In these scenarios:
- There is no single system everyone is connected to
- There is no guarantee of shared integrations
- There is no consistent trust framework
APIs work best in environments with predefined trust relationships.
Agent ecosystems are the opposite.
The core limitation
APIs are great for accessing identity data within a system.
But they weren’t designed to make identity:
- Portable
- Independently verifiable
- Usable across multiple parties without coordination
And that’s exactly what AI agents require.
If identity is going to work in an agent-driven world, it can’t depend on who you’re integrated with.
It needs to be something you can present—and something others can trust—no matter where it shows up.
The Solution: Verifiable Credentials
The core issue isn’t that identity verification isn’t happening.
It’s that the result can’t be used anywhere else.
So instead of treating ID verification as a one-time event, the model needs to change:
Take the result of that verification and package it into something reusable, portable, and verifiable.
That’s exactly what verifiable credentials enable.
What is a verifiable credential?
A verifiable credential is a digital version of a verified claim about a user.
For example:
- “This person’s identity has been verified”
- “This user passed KYC at a certain level of assurance”
- “This is their verified name and date of birth”
But unlike a database record or API response, a verifiable credential is:
- Portable: it can be presented anywhere
- Tamper-proof: it’s cryptographically secured
- Independently verifiable: it can be validated by any party with the appropriate trust context
It turns identity verification from something that lives in a system into something that can be carried and presented when needed.
Cryptographic proof, not just data
What makes verifiable credentials different is how trust is established.
Instead of relying on:
- API calls
- Database lookups
- Pre-existing integrations
They rely on cryptographic proof.
Each credential is:
- Digitally signed by the issuer (e.g. the ID verification provider)
- Protected against tampering
- Verifiable by third parties using standard methods
This means a relying party doesn’t need to “trust the API.”
They can verify the credential directly:
- Who issued it
- Whether it’s valid
- Whether it has been altered
That’s what makes identity portable and trustworthy at the same time.
The Issuer → Holder → Verifier model
Verifiable credentials follow a simple but powerful model:
- Issuer: the organization that verifies the user and issues the credential (e.g. an ID verification company)
- Holder: the entity that holds the credential (the user and agent, via wallets they control)
- Verifier: the party that needs to trust the identity (e.g. a bank, platform, or service the agent is interacting with)
Here’s how it works in practice:
- The ID verification company verifies the user
- It issues a verifiable credential containing the result
- The credential is stored in a wallet controlled by the user
- The user can delegate authority to the agent by issuing it a new credential derived from the original
- When needed, the agent presents it
- The verifier can validate it immediately, without requiring a direct integration with the issuer
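The flow above can be sketched end to end. This is a conceptual illustration with hypothetical identifiers, and HMAC stands in for the asymmetric signatures a real issuer would use; the key point it shows is that the `verify` step never calls back to the issuer.

```python
import hashlib
import hmac
import json

IDV_KEY = b"idv-signing-key"  # hypothetical; a real issuer publishes a public key instead

def sign(key: bytes, payload: dict) -> str:
    canonical = json.dumps(payload, sort_keys=True).encode()
    return hmac.new(key, canonical, hashlib.sha256).hexdigest()

# 1. Issuer: the ID verification company verifies the user and issues a credential.
credential = {
    "issuer": "did:example:idv-co",
    "subject": "did:example:alice",
    "claims": {"identity_verified": True, "assurance_level": "high"},
}
issued = {"credential": credential, "signature": sign(IDV_KEY, credential)}

# 2. Holder: the credential is stored in a wallet the user controls.
wallet = {"credentials": [issued]}

# 3. Verifier: checks issuer and integrity locally --
#    note there is no API call back to the IDV anywhere in this function.
TRUSTED_ISSUERS = {"did:example:idv-co": IDV_KEY}

def verify(presented: dict) -> bool:
    cred, sig = presented["credential"], presented["signature"]
    key = TRUSTED_ISSUERS.get(cred["issuer"])         # is the issuer one we trust?
    if key is None:
        return False
    return hmac.compare_digest(sign(key, cred), sig)  # has the data been altered?

presented = wallet["credentials"][0]
assert verify(presented)
```

The verifier needs only a list of issuers it trusts; it does not need an integration, an account, or a network round-trip to the IDV.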
Why this changes everything
This model solves the exact limitations we saw earlier:
- Identity is no longer system-bound
- Verification results become reusable
- Trust doesn’t depend on integrations
- AI agents can present proof, not just claims
Instead of asking: “Can I call your API to check this user?”
The interaction becomes: “Here is proof that this user has already been verified.”
For AI agents, this is the missing piece.
It gives them something they’ve never had before:
A way to carry, present, and prove identity, anywhere they operate.
The Role of Agent Wallets
Verifiable credentials solve the what: how identity is packaged and proven.
Wallets solve the how: how that identity is stored, controlled, and used in real interactions.
If AI agents are going to act on behalf of users, they need a secure way to:
- Access credentials
- Present proofs
- Respect permissions and constraints
That’s the role of wallets.
A place to store credentials
At the core, a wallet is where credentials live.
Instead of identity data being locked inside a company’s database, it’s stored in wallets that:
- Are controlled by the user and the agent
- Keep credentials secure (e.g. encryption, protected storage)
- Allow selective access when needed
This is what makes identity portable in practice.
A way to present proofs
Wallets don’t just store credentials, they enable their presentation.
When an AI agent needs to prove who authorized it, it doesn’t directly fetch and send identity data. Instead, it triggers a flow where the wallet:
- Selects the relevant credential
- Applies constraints (e.g. selective disclosure)
- Generates a verifiable presentation
That presentation is then shared with a verifier, who can validate it independently.
This is how agents move from claiming identity to proving it.
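Selective disclosure can be illustrated with a salted-hash scheme similar in spirit to SD-JWT: the issuer signs commitments to each attribute rather than the raw values, so the wallet can later reveal one attribute without exposing the rest. All names here are hypothetical, and HMAC again stands in for an asymmetric issuer signature.

```python
import hashlib
import hmac
import json
import secrets

def commit(value: str, salt: str) -> str:
    return hashlib.sha256((salt + value).encode()).hexdigest()

# Issuer commits to salted hashes of each attribute instead of raw values.
attributes = {"name": "Alice Example", "date_of_birth": "1990-01-01", "nationality": "NL"}
salts = {k: secrets.token_hex(8) for k in attributes}
commitments = {k: commit(v, salts[k]) for k, v in attributes.items()}

IDV_KEY = b"idv-signing-key"  # hypothetical issuer key
signature = hmac.new(IDV_KEY, json.dumps(commitments, sort_keys=True).encode(),
                     hashlib.sha256).hexdigest()

# The wallet builds a presentation revealing ONLY the date of birth.
presentation = {
    "commitments": commitments,
    "signature": signature,
    "disclosed": {"date_of_birth": ("1990-01-01", salts["date_of_birth"])},
}

def verify(p: dict) -> bool:
    expected = hmac.new(IDV_KEY, json.dumps(p["commitments"], sort_keys=True).encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, p["signature"]):
        return False  # the signed commitments were altered
    return all(commit(v, s) == p["commitments"][k]       # each disclosed value matches
               for k, (v, s) in p["disclosed"].items())  # its signed commitment

assert verify(presentation)
assert "name" not in presentation["disclosed"]  # other attributes stay hidden
```

The verifier learns only the disclosed attribute, yet can still confirm it was part of the credential the issuer originally signed.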
A control layer for permissions
This is where wallets become critical for agent use cases.
Because identity isn’t just about what’s true, it’s about what’s allowed.
Wallets act as a key control layer that:
- Enforces user consent
- Defines what an agent can do
- Limits how credentials can be used
This is essential for secure delegation.
Instead of giving an agent unrestricted access, the user can:
- Approve specific actions
- Grant scoped permissions
- Revoke or limit access depending on the implementation
Wallets play a central role in ensuring actions are tied back to explicit user authorization.
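As a rough sketch of that control layer, a wallet can gate every presentation on an explicit, revocable grant. This is an illustrative toy, not a real wallet API; the class and method names are invented for the example.

```python
class UserWallet:
    """Minimal sketch of a wallet as a consent and permission manager."""

    def __init__(self):
        self.credentials = {}
        self.grants = {}  # agent_id -> set of approved actions

    def store(self, name: str, credential: dict) -> None:
        self.credentials[name] = credential

    def grant(self, agent_id: str, actions: list) -> None:
        self.grants[agent_id] = set(actions)  # explicit, scoped consent

    def revoke(self, agent_id: str) -> None:
        self.grants.pop(agent_id, None)       # revocable at any time

    def present(self, agent_id: str, action: str, name: str) -> dict:
        # The wallet only releases a proof if consent covers this action.
        if action not in self.grants.get(agent_id, set()):
            raise PermissionError(f"{agent_id} not authorized for {action}")
        return self.credentials[name]

wallet = UserWallet()
wallet.store("kyc", {"issuer": "did:example:idv-co", "identity_verified": True})
wallet.grant("agent-42", ["open_account"])

assert wallet.present("agent-42", "open_account", "kyc")["identity_verified"]

wallet.revoke("agent-42")
try:
    wallet.present("agent-42", "open_account", "kyc")
    revoked_blocked = False
except PermissionError:
    revoked_blocked = True
assert revoked_blocked
```

The agent never touches the credential store directly; every presentation passes through the consent check, so revocation takes effect immediately.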
Agent interaction layer (agent wallet)
Often referred to as an “agent wallet,” this is better understood as how the agent interacts with identity:
- Used by the agent to interact with external systems
- Receives credentials from issuers (e.g. the IDV) and from the user (a delegation credential)
- Triggers the presentation of proofs when acting on behalf of the user
In practice, this operates in conjunction with the user’s wallet, not as an independent identity store.
Web wallet
- Integrated directly into existing applications
- No separate app required
- Enables seamless user experiences (e.g. inside a banking app or web flow)
This is where adoption accelerates, because it removes friction while maintaining security and control.
Why this matters for AI agents
Without wallets, AI agents have no reliable way to:
- Access identity securely
- Prove anything about the user
- Respect boundaries and permissions
They either:
- Overreach (too much access)
- Or underperform (no access to verified identity)
Wallets solve this by acting as:
- A secure storage layer
- A proof generation engine
- A consent and permission manager
The missing piece for delegation
When you combine:
- Verifiable credentials (proof)
- Wallets (control)
You get something new:
A secure way for users to delegate identity to agents.
The agent doesn’t need direct access to systems or databases.
It can trigger the presentation of:
- Verified credentials
- Along with verifiable proof of authorization
And the verifier can trust both.
This is what turns AI agents from untrusted intermediaries into trusted actors in identity flows.
Benefits for ID Verification Companies
This is a business model shift for ID verification companies.
Moving from one-time checks to reusable, verifiable identity changes how value is created, delivered, and captured.
1. Extend the value of KYC
Today, most KYC processes are single-use.
A user completes verification, the result is used once, and then the process is repeated the next time, often by a different service, sometimes even within the same organization.
By packaging verification results into verifiable credentials, that model changes:
- KYC becomes a reusable asset for different purposes
- The same verified identity can be used across multiple interactions
- The value of each verification extends far beyond the initial check
Instead of verifying the same user again and again, you verify once and reduce the need for repeated verification.
2. Enable new use cases
When identity becomes portable and provable, entirely new use cases open up.
AI agents
Agents can act on behalf of users with verifiable proof of identity and authorization, unlocking automation across onboarding, transactions, and support flows.
Partner ecosystems
Verified identity can be shared across organizations without complex integrations, enabling smoother onboarding and stronger collaboration between partners.
Cross-platform identity
Users can move between channels (web, mobile, call center, third-party services) without restarting the verification process every time.
This turns identity from a system-specific function into an ecosystem-wide capability.
3. Create monetization opportunities
Reusable identity introduces new ways to generate revenue.
Instead of charging only for the initial verification, ID verification companies can:
- Introduce new monetization models, such as charging per verification reuse
- Capture value each time a credential is presented and verified
- Create incentives for other parties to rely on their verification
This aligns value with usage.
The more a verified identity is used, the more valuable it becomes, both for the issuer and the ecosystem.
4. Improve accuracy and trust
Repeated verification introduces inconsistency.
Different systems:
- Collect slightly different data
- Apply different matching logic
- Reach different conclusions about the same user
With reusable, verifiable identity:
- Unnecessary re-verification loops are reduced
- Identity is derived from the same verified source
- Trust is anchored in the original, high-assurance verification
This leads to:
- Better data quality
- Fewer false positives and mismatches
- More consistent identity across systems
The bigger shift
For ID verification companies, this is the transition from:
Performing isolated checks → to providing trusted identity infrastructure
Instead of being a step in the process, you become a trusted issuer of verifiable identity that others can rely on repeatedly.
And in a world of AI agents, that shift becomes even more valuable.
Why ID Verification Companies Are Uniquely Positioned
This shift toward reusable, verifiable identity doesn’t require building something entirely new.
It builds directly on what ID verification companies already do better than anyone else.
You already verify users at scale
ID verification companies are responsible for one of the most critical steps in any identity flow:
Establishing that a real person is who they claim to be.
You:
- Verify government-issued IDs
- Perform biometric checks
- Assess fraud signals and risk
- Deliver different levels of assurance
That’s not trivial, it’s high-assurance identity verification.
And in an agent-driven world, that’s exactly what’s needed.
You already generate high-assurance identity data
Every verification you perform produces something valuable:
- Verified attributes (name, date of birth, etc.)
- Evidence of checks performed
- Assurance levels tied to that identity
Today, this value is mostly confined within systems.
But it doesn’t have to be.
When packaged as verifiable credentials, it becomes:
- Portable
- Reusable across systems
- Independently verifiable by third parties
This turns what was once internal output into portable, verifiable proof of identity.
You can issue credentials
This is the natural next step.
Instead of only returning verification results via APIs, ID verification companies can:
- Issue verifiable credentials containing the outcome of their checks
- Sign those credentials cryptographically
- Allow users (and agents) to present them wherever needed
This doesn’t replace your existing business, it extends it.
You’re still verifying users. You’re just making the result usable beyond the original transaction.
You can become trusted issuers in identity ecosystems
In a world where identity is portable, trust depends on who issued the credential.
ID verification companies are uniquely positioned to become:
- Trusted issuers that others rely on
- Recognized sources of high-assurance identity
- Entities that verifiers can depend on without requiring direct, point-to-point integrations
In other words, you move from being a service called in the background → to a source of identity that multiple parties can rely on.
From validator to issuer
This is the strategic shift.
Today you validate identity within a specific flow.
Tomorrow you issue an identity that can be used across many flows.
ID verification companies are not just validators, they can become identity issuers for the agent economy.
And as AI agents become part of how users interact with services, that role becomes even more critical.
Because agents don’t just need verification.
They need portable trust, and ID verification companies are in the best position to provide it.
Frequently Asked Questions
Can AI agents use IDV data?
Not reliably across systems or in a portable way. Most IDV data is stored inside internal systems and accessed via APIs, which don’t provide portable proof. To be usable by AI agents, IDV results need to be packaged into verifiable, reusable formats like verifiable credentials.
How do you share ID verification results securely?
By using verifiable credentials. These are cryptographically signed, tamper-evident, and can be shared with minimal data exposure (e.g. via selective disclosure), without requiring direct integrations. This enables secure, privacy-preserving sharing of verified identity.
What is a verifiable credential in identity verification?
A verifiable credential is a digital, cryptographically signed record of a verification result. It contains claims about a user (e.g. “identity verified”) along with proof of who issued it and its level of assurance. It can be independently verified by other parties within a given trust context.
Do I need a wallet for AI agents?
Yes. In most verifiable credential models, both users and AI agents rely on wallets. The user’s wallet stores their credentials and manages consent, while the agent operates with its own wallet that can hold delegated credentials from the user (defining what the agent is allowed to do) as well as credentials from issuers attesting to the agent itself (such as who created it or what it represents). This allows the agent to present both proof of identity and proof of authorization, while ensuring security, consent, and control are maintained.
How can ID verification results be reused?
By issuing them as verifiable credentials. Once created, these credentials can be presented across different systems, services, or agents without repeating the full verification process. This reduces friction while maintaining trust and assurance.