AI agents will soon be booking travel, managing workflows, and making purchases on our behalf. By next year, non-human agents may outnumber human users online.
The problem is, our identity systems were built for people, not for autonomous software.
During our recent “Know Your Agent” live session with Peter Horadan, CEO of Vouched, we went through the five critical identity problems we need to solve before agents become the default way we interact online:
1. Agent Identity
Systems must be able to tell when an agent — not a human — is acting. Every agent needs its own unique, verifiable identity so actions can be tracked and trusted.
2. Delegation
Users should be able to delegate only specific actions to an agent (e.g., “book flights” but not “redeem loyalty points”). These permissions must be granular, explicit, and revocable; a sketch of what such a grant could look like appears after the fifth problem below.
3. Reputation
Just like people, agents will have good or bad reputations. Service providers need ways to assess an agent’s trustworthiness and block those with poor track records.
4. Human Identity Without Real-Time Presence
Humans shouldn’t have to approve every action in real time. Once identity and permissions are established, agents should be able to operate within those boundaries without repeated authentication.
5. Legal Consent
Agents can’t click “I agree” on legal terms. We need mechanisms for humans to give durable consent up front so agents can act within legal frameworks without violating terms.
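To make the delegation model concrete, here is a minimal sketch of how a scoped, revocable grant could be represented and checked. The type and field names (DelegationGrant, allowedActions, expiresAt) are illustrative assumptions, not part of any specification discussed in the session.

```typescript
// Hypothetical shape of a delegation grant: the human delegates a narrow
// set of actions to one agent, with an expiry and a revocation switch.
interface DelegationGrant {
  humanDid: string;          // identity of the delegating user
  agentDid: string;          // identity of the agent acting on their behalf
  allowedActions: string[];  // e.g. ["book_flights"] but not ["redeem_loyalty_points"]
  expiresAt: Date;           // grants are time-bounded
  revoked: boolean;          // the user can withdraw consent at any time
}

// A service would consult the grant before executing an agent's request.
function isActionPermitted(grant: DelegationGrant, action: string, now: Date = new Date()): boolean {
  return !grant.revoked && now < grant.expiresAt && grant.allowedActions.includes(action);
}

const grant: DelegationGrant = {
  humanDid: "did:example:alice",
  agentDid: "did:example:travel-agent-1",
  allowedActions: ["book_flights"],
  expiresAt: new Date("2026-12-31"),
  revoked: false,
};

isActionPermitted(grant, "book_flights");          // true, until the grant expires or is revoked
isActionPermitted(grant, "redeem_loyalty_points"); // false: never delegated
```

Because the grant is a standalone, durable record rather than a live approval, it also addresses problem 4: the agent can keep acting within these boundaries without pulling the human back for every request.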
How MCP-I and Verifiable Credentials Fix This
The Model Context Protocol – Identity (MCP-I) adds the missing identity and trust layer to agent interactions. When an agent connects to a service (the flow is sketched in code after this list):
- It identifies itself with a decentralized identifier (DID) that’s unique and verifiable.
- The human authorizes the agent through a familiar OAuth-like flow — logging in directly to the service, not through the agent — and grants only the actions they’re comfortable with. These permissions are saved and shared as verifiable credentials.
- The permissions are durable and revocable so the user stays in control without constant re-authentication.
- Every action is auditable, allowing services to assess and update the agent’s reputation in real time.
- Legal terms are agreed to once by the human during setup, solving the “checkbox” problem for ongoing agent actions.
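Putting those steps together, the service-side handling of an incoming agent request might look roughly like the sketch below. It reuses the DelegationGrant and isActionPermitted sketch from earlier; the request shape, the signature check, and the audit log are placeholders for illustration, not the actual MCP-I API.

```typescript
// Illustrative service-side handling of an MCP-I-style agent request.
// The helpers and field names are stand-ins, not the actual MCP-I API.

interface AgentRequest {
  agentDid: string;        // the agent's decentralized identifier (DID)
  action: string;          // what the agent wants to do, e.g. "book_flights"
  grant: DelegationGrant;  // the delegation credential it presents (see earlier sketch)
  signature: string;       // binds the request to the holder of the DID
}

// Placeholder: a real service would resolve the DID document and verify the
// signature cryptographically against the agent's public key.
async function verifyAgentSignature(agentDid: string, signature: string): Promise<boolean> {
  return agentDid.startsWith("did:") && signature.length > 0;
}

// Placeholder audit sink; a real deployment would persist this for compliance
// and for scoring the agent's reputation over time.
const auditLog: object[] = [];

async function handleAgentRequest(req: AgentRequest): Promise<"executed" | "rejected"> {
  // 1. Agent identity: confirm the request really comes from the holder of this DID.
  if (!(await verifyAgentSignature(req.agentDid, req.signature))) return "rejected";

  // 2. Delegation: the grant must cover this action, be unexpired, and not revoked.
  if (!isActionPermitted(req.grant, req.action)) return "rejected";

  // 3. Auditability: record who did what on whose behalf.
  auditLog.push({
    agent: req.agentDid,
    human: req.grant.humanDid,
    action: req.action,
    at: new Date().toISOString(),
  });

  return "executed";
}
```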
Verifiable credentials make this even stronger.
Instead of sharing usernames or passwords, users can issue their agents cryptographically signed credentials that are scoped to specific tasks, auditable for compliance, and revocable at any time. This eliminates password sharing, prevents session hijacking, and ensures every action can be tied to both the human and the authorized agent.
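As a rough illustration of what such a credential could contain, here is a sketch loosely modeled on the W3C Verifiable Credentials data model. The credential type, DIDs, scope field, and status endpoint are invented for this example; the exact schema an MCP-I deployment would use isn't specified here.

```typescript
// Illustrative credential loosely following the W3C Verifiable Credentials data
// model: the human (issuer) delegates a narrowly scoped capability to the agent
// (credential subject). All identifiers and values are invented for this sketch.
const delegationCredential = {
  "@context": ["https://www.w3.org/2018/credentials/v1"],
  type: ["VerifiableCredential", "AgentDelegationCredential"], // second type is hypothetical
  issuer: "did:example:alice",                   // the human granting authority
  issuanceDate: "2025-06-01T00:00:00Z",
  credentialSubject: {
    id: "did:example:travel-agent-1",            // the agent receiving authority
    allowedActions: ["book_flights"],            // scoped to specific tasks, nothing more
  },
  credentialStatus: {
    id: "https://issuer.example/status/42",      // services check this to honor revocation
    type: "StatusList2021Entry",
  },
  proof: {
    type: "Ed25519Signature2020",                // cryptographic signature by the issuer
    proofValue: "zPlaceholderSignatureValue",    // placeholder, not a real signature
  },
};
```

Because the authorization lives in a signed credential rather than a shared password, a service can check who authorized what without ever handling the user's login secrets, and the user can revoke the grant by updating its status rather than rotating passwords.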
The result: a secure, privacy-preserving foundation that lets AI agents work for us without compromising trust.