
5 Identity Gaps That Put AI Agents at Risk

Published September 2, 2025


AI agents will soon be booking travel, managing workflows, and making purchases on our behalf. By next year, non-human agents may outnumber human users online.

The problem is that our identity systems were built for people, not for autonomous software.

During our recent “Know Your Agent” live session with Peter Horadan, CEO of Vouched, we went through the five critical identity problems we need to solve before agents become the default way we interact online:

1. Agent Identity

Systems must be able to tell when an agent — not a human — is acting. Every agent needs its own unique, verifiable identity so actions can be tracked and trusted.

2. Delegation

Users should be able to delegate only specific actions to an agent (e.g., “book flights” but not “redeem loyalty points”). These permissions must be granular, explicit, and revocable; a data-model sketch follows the five gaps below.

3. Reputation

Just like people, agents will have good or bad reputations. Service providers need ways to assess an agent’s trustworthiness and block those with poor track records.

4. Human Identity Without Real-Time Presence

Humans shouldn’t have to approve every action in real time. Once identity and permissions are established, agents should be able to operate within those boundaries without repeated authentication.

5. Legal Consent

Agents can’t click “I agree” on legal terms. We need mechanisms for humans to give durable consent up front so agents can act within legal frameworks without violating terms.
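To make the delegation gap (number 2 above) concrete, here is a minimal sketch in TypeScript of how a granular, revocable grant could be modeled and checked. The structure and field names (allowedActions, expiresAt, and so on) are illustrative assumptions, not a published schema:

```typescript
// Illustrative sketch only; field names are hypothetical, not a published schema.
interface DelegationGrant {
  humanDid: string;          // DID of the delegating human
  agentDid: string;          // DID of the agent receiving the authority
  allowedActions: string[];  // explicit, granular permissions
  deniedActions: string[];   // actions the agent must never perform
  expiresAt: string;         // ISO 8601 timestamp; the grant is not open-ended
  revoked: boolean;          // flipping this immediately withdraws the authority
}

const grant: DelegationGrant = {
  humanDid: "did:example:alice",
  agentDid: "did:example:travel-agent-7",
  allowedActions: ["flights:search", "flights:book"],
  deniedActions: ["loyalty:redeem"],
  expiresAt: "2026-01-01T00:00:00Z",
  revoked: false,
};

// A service consults the grant before honoring a request from the agent.
function isAllowed(g: DelegationGrant, action: string): boolean {
  if (g.revoked || Date.now() > new Date(g.expiresAt).getTime()) return false;
  if (g.deniedActions.includes(action)) return false;
  return g.allowedActions.includes(action);
}

console.log(isAllowed(grant, "flights:book"));   // true
console.log(isAllowed(grant, "loyalty:redeem")); // false
```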

How MCP-I and Verifiable Credentials Fix This

The Model Context Protocol – Identity (MCP-I) adds the missing identity and trust layer to agent interactions. When an agent connects to a service:

  • It identifies itself with a decentralized identifier (DID) that’s unique and verifiable.
  • The human authorizes the agent through a familiar OAuth-like flow — logging in directly to the service, not through the agent — and grants only the actions they’re comfortable with. These permissions are saved and shared as verifiable credentials.
  • The permissions are durable and revocable so the user stays in control without constant re-authentication.
  • Every action is auditable, allowing services to assess and update the agent’s reputation in real time.
  • Legal terms are agreed to once by the human during setup, solving the “checkbox” problem for ongoing agent actions.
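As a rough illustration of that flow, the sketch below shows one way the authorization step could produce such a credential. It is a sketch under stated assumptions, not the MCP-I wire format; the function name issueDelegationCredential, the DIDs, and all field names are hypothetical:

```typescript
// Illustrative sketch of the authorization step; field and function names are
// assumptions, not the MCP-I specification.
interface DelegationCredential {
  issuer: string;          // DID of the service (or credential issuer)
  subject: string;         // DID of the authorized agent
  delegator: string;       // DID of the human who granted the permissions
  permissions: string[];   // only the scopes the human approved
  termsAcceptedAt: string; // when the human agreed to the legal terms
  expiresAt: string;
  proof: string;           // cryptographic signature over the fields above
}

// The human completes an OAuth-like consent flow directly with the service
// (never handing credentials to the agent) and approves a set of scopes.
function issueDelegationCredential(
  humanDid: string,
  agentDid: string,
  approvedScopes: string[],
): DelegationCredential {
  return {
    issuer: "did:example:airline-service",
    subject: agentDid,
    delegator: humanDid,
    permissions: approvedScopes,
    termsAcceptedAt: new Date().toISOString(),
    expiresAt: new Date(Date.now() + 90 * 24 * 60 * 60 * 1000).toISOString(),
    proof: "<signature placeholder>", // a real issuer would sign with its key
  };
}

// The agent stores the credential and presents it on later connections,
// so the human is not asked to re-authenticate for every action.
const credential = issueDelegationCredential(
  "did:example:alice",
  "did:example:travel-agent-7",
  ["flights:search", "flights:book"],
);
console.log(credential.permissions);
```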

Verifiable credentials make this even stronger.

Instead of sharing usernames or passwords, users can issue their agents cryptographically signed credentials that are scoped to specific tasks, auditable for compliance, and revocable at any time. This eliminates password sharing, prevents session hijacking, and ensures every action can be tied to both the human and the authorized agent.
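On the verification side, a service can gate each requested action against the presented credential and log the decision for auditing. The sketch below assumes hypothetical helpers (verifySignature and isRevoked stand in for real signature verification and revocation-registry lookups):

```typescript
// Illustrative service-side check; verifySignature and isRevoked are placeholders
// for whatever signature verification and revocation lookups a deployment uses.
interface PresentedCredential {
  issuer: string;
  subject: string;       // agent DID
  delegator: string;     // human DID
  permissions: string[];
  expiresAt: string;
  proof: string;
}

function verifySignature(cred: PresentedCredential): boolean {
  // Placeholder: in practice, verify cred.proof against the issuer's public key.
  return cred.proof.length > 0;
}

function isRevoked(cred: PresentedCredential): boolean {
  // Placeholder: in practice, consult a revocation registry or status list.
  return false;
}

interface AuditEntry {
  at: string;
  agentDid: string;
  humanDid: string;
  action: string;
  allowed: boolean;
}

const auditLog: AuditEntry[] = [];

function authorizeAction(cred: PresentedCredential, action: string): boolean {
  const allowed =
    verifySignature(cred) &&                          // issued as claimed
    !isRevoked(cred) &&                               // consent not withdrawn
    Date.now() < new Date(cred.expiresAt).getTime() &&
    cred.permissions.includes(action);                // within the granted scope

  // Every decision is recorded and tied to both the human and the agent,
  // which is what lets services build and update agent reputation over time.
  auditLog.push({
    at: new Date().toISOString(),
    agentDid: cred.subject,
    humanDid: cred.delegator,
    action,
    allowed,
  });
  return allowed;
}
```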

The result: a secure, privacy-preserving foundation that allows AI agents to work for us without compromising trust.

Create your first digital ID credential today

The Truvera platform helps you integrate reusable ID credentials into your existing identity workflows to support a variety of goals: reduce onboarding friction, connect siloed data, verify trusted organizations and customers, and monetize credential verification.