

It was nice for us humans to have the Internet mostly to ourselves for a while, because we're definitely not alone on the web in 2025. With the swirling cloud of AI agents, APIs, bots, and service accounts only growing in size, it's no surprise that non-human identities outnumber human identities 20 to 1.

These machine identities exist for good reason, mostly in service of improving our online lives. If you need to buy shoes, an AI shopping assistant can browse a variety of sites, log in to your ecommerce account, and buy you the perfect pair after accounting for all your preferences. If you're a sales professional, a B2B AI assistant can message prospects on LinkedIn, set up calendar invites and meetings on your behalf, and summarize the conversation in your CRM.

Well, things should work this way. But reality is more complex, as shown by the fact that less than two-fifths of AI projects successfully transition to production.

A lot of this complexity stems from identity. Whether you're a developer working to make your app or website "agent-ready" or building an AI agent yourself, you need to work through this complexity to share what you've built with the world in a secure, scalable manner.

Agentic identity drivers

AI developers must think carefully about agentic identity for a variety of reasons, including the nature of how AI agents operate, the scale at which they operate, and the potential downsides of giving them too much agency.

Fig: Agentic identity drivers

Security

The release of every new LLM or AI model over the past few years has been accompanied by crowdsourced, livestreamed jailbreaking attempts as people try to find gaps and loopholes to outreason the model. Putting the entertainment and educational value of these jailbreaks to one side, this points to a fundamental new reality: agentic interfaces are a new threat vector for cybercriminals to exploit.

OWASP released their inaugural Top 10 Threats for GenAI report earlier this year, providing tons of excellent insight into the different ways AI models, agents, and workflows can be compromised by humans and even other AI agents. It’s noteworthy that authentication and authorization mitigations are highlighted in 5 out of these 10 threats.

  • Memory Poisoning: Implement memory content validation, session isolation, robust authentication mechanisms for memory access, anomaly detection systems, and regular memory sanitization routines.

  • Tool Misuse: Enforce strict tool access verification, monitor tool usage patterns, validate agent instructions, and set clear operational boundaries to detect and prevent misuse.

  • Privilege Compromise: Implement granular permission controls, dynamic access validation, robust monitoring of role changes, and thorough auditing of elevated privilege operations.

  • Identity Spoofing & Impersonation: Develop comprehensive identity validation frameworks, enforce trust boundaries, and deploy continuous monitoring to detect impersonation attempts.

  • Human Attacks on Multi-Agent Systems: Restrict agent delegation mechanisms, enforce inter-agent authentication, and deploy behavioral monitoring to detect manipulation attempts.

Identity management is critical for GenAI security to ensure:

  • Secure authentication across agents, apps, APIs, and users.

  • Scoped and time-bound access for AI agents on users’ behalf.

  • Standardization in how these identities communicate with each other.

Speaking of standardization…

Interoperability 

If AI systems are to be woven into every aspect of our future lives, innovation cannot happen behind closed doors where only a few people have the key. The industry needs to converge on a set of standardized protocols to ensure that AI agents can securely and seamlessly access local and remote databases, third-party tools, and the public Internet to reach their full potential. And the early signs here are heartening.

Take MCP, for example: introduced by Anthropic in November 2024, it had already seen remarkable ecosystem adoption barely five months later.

We believe in the potential of MCP to act as a connective fabric that lets LLMs easily access third-party tools and systems–and that’s before taking Google’s newer Agent2Agent (A2A) protocol into account, which holds great promise in standardizing how agents communicate with each other.

While these protocols provide important and necessary guidelines and scaffolding to set up AI systems, the actual authentication and access control mechanisms are old and familiar: the tried and tested OAuth standard. OAuth is a perfect fit for agentic identity flows because:

  • It supports machine-to-machine communication through the client credentials flow (see the sketch after this list)

  • It supports scopes and tokens, which are critical for secure, authorized access

  • Flows like CIBA provide the building blocks of human-in-the-loop approval processes within AI agent journeys
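To ground the first point, here's a minimal sketch of the client credentials flow in Python using the requests library. The token endpoint URL, client ID, and secret are hypothetical placeholders, not any real provider's values:

```python
import requests  # pip install requests

# Hypothetical token endpoint and credentials, for illustration only.
TOKEN_URL = "https://auth.example.com/oauth/token"
CLIENT_ID = "my-agent"
CLIENT_SECRET = "replace-me"

def fetch_machine_token(scope: str) -> str:
    """Exchange client credentials for a scoped access token (RFC 6749, section 4.4)."""
    resp = requests.post(
        TOKEN_URL,
        data={
            "grant_type": "client_credentials",
            "client_id": CLIENT_ID,
            "client_secret": CLIENT_SECRET,
            "scope": scope,  # request only what this agent actually needs
        },
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]

# The agent then presents this bearer token on every API call it makes.
token = fetch_machine_token("calendar:read")
```

The same pattern extends to user-delegated flows: swap the grant type and add a consent step, and the agent is acting on a person's behalf rather than its own.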

The challenge? OAuth is a beast of a standard for developers to navigate, even in 2025. Case in point: while MCP recommends following the OAuth 2.1 standard with PKCE, implementing those flows is much easier said than done. And that's without taking into account the several other open questions when it comes to MCP and authorization.

AI developers need to keep their eyes on how these protocols evolve, but expecting every AI developer to become an auth expert just to take a project to production is not sustainable.

Agent experience

Agents are a class of digital citizens that app and API developers have not had to cater to before, their traits drawing from both the robotic and the human.

  • Agents have agency, which makes them much more than APIs. Once given a goal, they can perform a cascading series of tasks, including accessing accounts, making payments, and communicating with users or other agents.

  • Agents generally won’t interact with your app like humans will, preferring APIs and token handshakes to point-and-click UI (although there are exceptions). 

  • Agents are linked to humans, but they use APIs much more exhaustively than humans do. Giving an AI agent the same access a human has allows it to potentially abuse systems far more than a human would. Access should be clearly scoped and short-lived: enough for an agent to complete its task, but not so long that it leaves backdoors for agent compromise down the line.

Fig: Humans and AI navigate the web differently

As Netlify Co-Founder and CEO Mathias Biilmann noted: “We need to start focusing on AX or agent experience — the holistic experience AI agents will have as the user of a product or platform.” This means:

  • Clear and well-defined APIs

  • Stable endpoints

  • Robust documentation and SDKs

  • Scoped, time-bound access (see the sketch below)

  • End user consent mechanisms 

  • Secure token exchange, management, storage, and revocation

Your list may vary, but every developer should have a list by now. 
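As one way to make "scoped, time-bound access" concrete, here's a sketch that mints a short-lived JWT for an agent acting on a user's behalf, using the PyJWT library. The signing key is a placeholder, and using the azp claim to record which agent holds the token is an assumption for illustration:

```python
import time
import jwt  # pip install PyJWT

SIGNING_KEY = "replace-with-a-real-secret"  # placeholder for illustration

def mint_agent_token(user_id: str, agent_id: str, scopes: list[str], ttl_seconds: int = 300) -> str:
    """Issue a short-lived, narrowly scoped token: enough for the agent's task, nothing more."""
    now = int(time.time())
    claims = {
        "sub": user_id,             # the human the agent is acting for
        "azp": agent_id,            # the agent actually holding the token
        "scope": " ".join(scopes),  # only the permissions this task needs
        "iat": now,
        "exp": now + ttl_seconds,   # time-bound: expires in 5 minutes by default
    }
    return jwt.encode(claims, SIGNING_KEY, algorithm="HS256")

token = mint_agent_token("user-123", "shopping-agent", ["orders:create", "catalog:read"])
```

A five-minute expiry forces the agent back through the token exchange (and any consent checks) rather than leaving a long-lived credential lying around.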

Developer experience

As AI agents continue making our lives easier, the lives of AI developers are poised to get harder if we’re not careful. We’ve already covered how being responsible for OAuth implementation and maintenance is a tall order, but that’s just scratching the surface of the tooling travails that await.

Say you’re an AI developer building a suite of three AI agents for different use cases, and you’ve identified a core set of five external tools (e.g. Google Calendar, Zoom, HubSpot, Zendesk, GitHub) these agents will interact with to complete their tasks. You now run into the NxM problem: each of the three agents requires an individual integration with each of the five tools, giving you a total of 15 integrations (at least) to set up and maintain.

Fig: The chaotic NxM problem

MCP is an important part of avoiding the NxM problem, but adoption is not going to be instantaneous. In the near future, developers responsible for agent tooling should expect to face some tools that support MCP but not OAuth, some tools that support OAuth but not MCP, and several flavors in between.
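While the ecosystem converges, one common mitigation is a thin connector layer: every agent codes against a single interface, and every tool gets a single adapter, turning N×M integrations into N+M. A minimal sketch, with hypothetical connector classes standing in for real adapters:

```python
from typing import Protocol

class ToolConnector(Protocol):
    """The one contract all agents code against, whatever tool sits behind it."""
    name: str
    def call(self, action: str, params: dict) -> dict: ...

class CalendarConnector:
    name = "google-calendar"
    def call(self, action: str, params: dict) -> dict:
        # A real adapter would handle OAuth tokens and the provider's API here.
        return {"status": "ok", "tool": self.name, "action": action}

class CRMConnector:
    name = "hubspot"
    def call(self, action: str, params: dict) -> dict:
        return {"status": "ok", "tool": self.name, "action": action}

# Agents look tools up in a registry instead of knowing each provider's auth quirks.
REGISTRY: dict[str, ToolConnector] = {c.name: c for c in (CalendarConnector(), CRMConnector())}
result = REGISTRY["google-calendar"].call("create_event", {"title": "Demo with prospect"})
```

This is essentially the shape MCP standardizes; the sketch just shows why the abstraction pays off even before every tool speaks the protocol.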

Even if you’ve embraced the grind and hustle, would you rather spend your time building the AI agent you thought you’d build or wrangling endless tool integrations and managing endless provider tokens to keep the lights on?

The agentic identity paradigm

What should identity infrastructure for the agentic age look like? In our view, it should be:

Fine-grained

Supporting agentic flows requires a fundamental rethink of how access control for APIs is handled. For the longest time, API access was limited to service accounts and tied to coarse-grained authorization based solely on roles. But AI agents can request API access on behalf of individual users and sometimes deal with multiple APIs within the same function call, greatly increasing the complexity and nuance involved.

With AI agents, API access needs to be tied to both roles and scopes, allowing for fine-grained API access while also taking into account role-based governance. Using scopes and roles in conjunction enables enforcing OAuth-based granular permissions while also accounting for business logic and hierarchies.
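A sketch of what that enforcement might look like, where a request must pass both the token's granted scopes and a role-based policy (the claim names and role hierarchy here are assumptions for illustration):

```python
# Hypothetical claims extracted from an already-validated access token.
token_claims = {
    "sub": "user-123",
    "azp": "scheduling-agent",
    "scope": "calendar:write contacts:read",
    "roles": ["member"],
}

# Governance layer: which roles may ever hold which scopes.
ROLE_POLICY = {
    "member": {"calendar:write", "contacts:read"},
    "admin": {"calendar:write", "contacts:read", "billing:write"},
}

def is_allowed(claims: dict, required_scope: str) -> bool:
    """Fine-grained scope check AND role-based governance check must both pass."""
    granted = set(claims.get("scope", "").split())
    role_scopes = set().union(*(ROLE_POLICY.get(r, set()) for r in claims.get("roles", [])))
    return required_scope in granted and required_scope in role_scopes

assert is_allowed(token_claims, "calendar:write")      # granted and role-permitted
assert not is_allowed(token_claims, "billing:write")   # neither granted nor allowed by role
```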

Interoperable

Agentic identity flows need to embrace foundational protocols and technology building blocks that are agent-friendly or designed with agents in mind, like OAuth, MCP, JWTs, and OpenAPI specs. Your app’s identity infrastructure should be extensible enough to support any AI agent or API and secure enough to only provide them with scoped, authorized access. 

User, developer & AI agent centric

Happy developers make happy AI agents make happy users. Agentic identity systems should be carefully crafted to provide all stakeholders a great experience:

  • Developers should get “layers of abstraction” that simplify authentication, authorization, and tooling complexity–on the app and agent side–to help them focus on core AI initiatives.

  • AI agents should get seamless, clearly scoped access to data and actions on behalf of users.

  • Users should get visibility into the data and actions being requested on their behalf, have the ability to provide consent, and also be able to offer additional auth checks before sensitive actions.
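To illustrate the last point, here's a minimal sketch of gating a sensitive agent action behind explicit user approval. In production this step would ride on an out-of-band flow like CIBA or a push notification; a console prompt keeps the sketch self-contained (the action names are hypothetical):

```python
SENSITIVE_ACTIONS = {"payment:execute", "account:delete"}

def request_user_approval(user_id: str, action: str) -> bool:
    """Stand-in for an out-of-band approval step (push notification, CIBA, etc.)."""
    answer = input(f"[{user_id}] Allow agent to perform '{action}'? (y/n) ")
    return answer.strip().lower() == "y"

def perform_action(user_id: str, action: str) -> str:
    if action in SENSITIVE_ACTIONS and not request_user_approval(user_id, action):
        return "denied: user did not consent"
    return f"executed: {action}"

print(perform_action("user-123", "payment:execute"))
```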

Welcome to AI Launch Week

If you’ve read this far, thank you and welcome to AI Launch Week! We have a number of exciting product announcements, demo experiences, integrations, and developer updates planned for the week.

  • Day 1 is today, a chance to reflect on the role identity will play in secure, scalable agentic adoption.

  • On Day 2, we have something for any organization looking to make their apps and APIs “agent-ready”.

  • On Day 3, we have something for organizations building AI agents whose developers are buried under a mountain of tooling debug sessions and token management tasks.

  • On Day 4, we have something for organizations looking to secure their MCP servers as well as extend their functionality.

  • On Day 5, we plan to highlight the endless ways you can integrate Descope into your existing AI systems.

Keep up with daily updates on AI Launch Week here, or on LinkedIn and X. See you tomorrow!