The Model Context Protocol (MCP) is an emerging open standard that allows AI systems (like LLMs) to interact with external tools and data in a standardized way. Early MCP deployments ran the MCP server and client together in a local and controlled environment, meaning there was no need for complex authorization.

As organizations begin to deploy MCP servers remotely, the need to better secure sensitive data and operations has become urgent. To answer this need, the MCP specification introduced a new authorization component based on OAuth 2.1 with the goal of leveraging battle-tested standards to protect MCP endpoints. 

This guide covers the core concepts of MCP authorization and explains how those elements are sometimes misapplied or misunderstood. 

We’ll cover:

  • Why OAuth-style authorization matters for MCP

  • Requirements MCP places on implementers 

  • Current challenges being discussed in the community 

  • Insights into open questions about MCP

The evolution of MCP and authentication

Originally, MCP clients could talk to MCP servers directly through stdio (often on the same machine), so authentication was minimal. As the MCP ecosystem evolved, the need arose to call MCP servers across the network (e.g. Streamable HTTP), such as by third-party applications on behalf of users. 

For example, a user might run an MCP client application that needs to access their data on a remote MCP server in a data center. The user (resource owner) must grant the client access to the server, ideally without handing out passwords or API keys. This is exactly the problem OAuth solves—delegated authorization.

Because modern organizations have existing identity providers (IdPs) or OAuth2 authorization servers, the maintainers of MCP chose to piggyback on OAuth 2.1 rather than creating a new scheme. This means an MCP server can trust tokens issued via an OAuth flow, and MCP clients can obtain those tokens by redirecting users to an authorization server. 

Familiar OAuth roles can now enter the MCP picture: the MCP server acts as the resource server, the OAuth IdP is the authorization server, and the MCP client is the OAuth client requesting access on behalf of the user. However, as we’ll see, the current MCP specification leaves the final implementation of the concepts up to interpretation. 

Fig: OAuth 2.0 roles

Need a refresher on OAuth? Read our beginner-friendly guide.

MCP authorization requirements at a glance

At the time of writing, the MCP Authorization Specification establishes a framework based on OAuth 2.1 to secure interactions between MCP clients and servers:

| Requirement | Recommendation status | Description |
| --- | --- | --- |
| OAuth 2.1 | MUST | Implement OAuth 2.1; PKCE mandatory for authorization code flows with public clients |
| Dynamic Client Registration (DCR) | SHOULD | Support RFC 7591 to allow clients to programmatically register with the authorization server |
| Authorization Server Metadata (ASM) | SHOULD (servers); MUST (clients) | Implement RFC 8414 discovery to expose auth server endpoints and capabilities |
| Default endpoints | MUST (if no ASM) | Provide OAuth endpoints at /authorize, /token, and /register on the MCP server |

Why these components matter

OAuth 2.1: Provides a standardized security framework with mandatory PKCE to protect against authorization code interception attacks. Since most MCP clients are public (like CLI tools or apps), PKCE is always required.
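As a concrete illustration, a PKCE verifier/challenge pair can be generated with nothing but the standard library (the helper name `make_pkce_pair` is ours, not part of any MCP SDK):

```python
import base64
import hashlib
import secrets

def make_pkce_pair() -> tuple[str, str]:
    """Generate a PKCE code_verifier and its S256 code_challenge."""
    # 32 random bytes -> a 43-char base64url verifier (RFC 7636 allows 43-128 chars)
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    # code_challenge = BASE64URL(SHA-256(code_verifier)), no padding
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge

verifier, challenge = make_pkce_pair()
# The client sends `challenge` (with code_challenge_method=S256) in the
# authorization request, and later proves possession by sending `verifier`
# in the token request.
```

The authorization server recomputes the SHA-256 of the verifier at token time, so an intercepted authorization code alone is useless to an attacker.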

Dynamic Client Registration: DCR allows MCP clients to obtain credentials (a client ID and possibly secrets) at runtime rather than requiring manual pre-registration. While MCP’s maintainers encourage this, not all IdPs support it, and it often requires initial access tokens or admin privileges. Thus, many enterprises may bypass this by pre-registering trusted clients.

Authorization Server Metadata: ASM allows MCP clients to discover authentication endpoints automatically. Without this discovery mechanism, developers must hardcode endpoint locations, complicating interoperability. When ASM is implemented, an MCP client can query a well-known URL by default: <MCP server base URL>/.well-known/oauth-authorization-server

This exposes a JSON document containing all necessary OAuth endpoints and capabilities:

{
  "issuer": "https://auth.example.com",
  "authorization_endpoint": "https://auth.example.com/authorize",
  "token_endpoint": "https://auth.example.com/token",
  "registration_endpoint": "https://auth.example.com/register",
  "scopes_supported": ["read", "write", "admin"],
  "response_types_supported": ["code"],
  "grant_types_supported": ["authorization_code", "refresh_token"],
  "token_endpoint_auth_methods_supported": ["none"],
  "code_challenge_methods_supported": ["S256"]
}

Default endpoints: If an MCP server does not support this metadata discovery, the specification requires it to host default OAuth endpoints at fixed paths on the MCP server: /authorize, /token, and /register. Either the server exposes a .well-known/oauth-authorization-server document, or the client will assume the auth endpoints live at those paths.

Implementation challenges and community discussions

The MCP specification’s approach to authorization has sparked active community debate. The fundamental issue: it treats the MCP server as both a resource server and an authorization server. In classical OAuth, resource servers don’t issue tokens; they verify them. The MCP specification suggests that the MCP server might host authorization endpoints and mint tokens, creating significant implementation complexity, especially in enterprise environments.

Key technical challenges

  • Discovery mechanism conflicts: Should MCP servers host discovery documents (.well-known/oauth-authorization-server) or rely on WWW-Authenticate headers with resource_metadata pointers?

  • SDK integration issues: SDKs and reference implementations often assume MCP servers are also the authorization server, making third-party integration trickier.

  • Token lifecycle management: Spec mandates complex token mapping and tracking when using third-party authorization, which significantly increases the implementation burden.

  • Connection protocol limitations: SSE (Server-Sent Events) endpoints lack clear conventions for handling insufficient scopes or token expiry during active connections.

These challenges have led to proposals for decoupling authorization concerns and making token mapping optional, which could simplify deployment.

Best practices for MCP authorization implementation

With so much developer feedback, several MCP implementation best practices are emerging. These practices aim to simplify deployment while accounting for potential vulnerabilities in the current specification.

Separate authorization servers from resource servers

Treat the MCP server as a Resource Server only, and use an external, dedicated authorization server for OAuth flows. This aligns with enterprise architectures where security is centralized. The MCP server’s job is to validate tokens and enforce RBAC/permissions internally, but not to manage user logins or token issuance. Keeping MCP servers stateless (or at least not storing OAuth state) improves scalability and lowers maintenance overhead.

Point MCP clients to an external authorization server

If you have an existing IdP (Identity Provider), configure your MCP server to advertise that issuer’s metadata rather than hosting its own. This can be done by populating the .well-known/oauth-authorization-server response or by responding with a WWW-Authenticate: Bearer … header that contains the IdP’s discovery/documentation URL. This lets the MCP client know exactly where to perform the OAuth exchange securely.
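One way to sketch the header-based approach, using a framework-agnostic response shape (`unauthorized_response` and the dict layout are illustrative assumptions, not any particular server API):

```python
def unauthorized_response(idp_metadata_url: str) -> dict:
    """Build a 401 that points the client at an external IdP's discovery
    document instead of hosting OAuth endpoints on the MCP server."""
    return {
        "status": 401,
        "headers": {
            # The client reads this header, fetches the metadata URL, and
            # runs the OAuth flow against the external authorization server.
            "WWW-Authenticate": 'Bearer resource_metadata="%s"' % idp_metadata_url,
        },
        "body": "",
    }
```

The MCP server itself never mints tokens in this design; it only tells clients where to get them.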

Avoid token mapping

Token mapping essentially creates an OAuth server inside your resource server. This means implementing secure storage, request validation, and expiration/revocation handling. While the specification requires MCP-issued tokens, you can bypass this by having the MCP server accept and validate external tokens directly (such as by using JWKs and checking scopes). 
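A minimal sketch of accepting external tokens directly, assuming the IdP issues JWTs with a space-delimited scope claim. Note that the decoding below deliberately skips signature verification, which a real server must perform first against the IdP's jwks_uri using a proper JOSE library:

```python
import base64
import json
import time

def decode_jwt_claims(token: str) -> dict:
    """Decode the claims segment of a JWT. No signature check here --
    production code must verify the signature via the IdP's jwks_uri
    before trusting any claim."""
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore base64 padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

def token_allows(token: str, required_scope: str) -> bool:
    """Accept an external IdP token directly: check expiry and scope."""
    claims = decode_jwt_claims(token)
    if claims.get("exp", 0) < time.time():
        return False  # expired (or missing exp: deny by default)
    return required_scope in claims.get("scope", "").split()
```

Because the MCP server only validates and never issues tokens, there is no token store, no mapping table, and no revocation machinery to maintain.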

Host ASM at the MCP server

When using an external authorization server, we recommend hosting a .well-known metadata file that redirects to the external one. Keep an eye on the specification for changes: there is much discussion of adopting the OAuth Protected Resource Metadata draft, which allows the MCP server to return a resource_metadata link via WWW-Authenticate or to host metadata on a different path.

Implement scopes at the function level

MCP tool calls may not map one-to-one with APIs, so scopes should be validated at the tool or function level. In addition to middleware-level scope validation for all routes, the request object can be parsed to determine the tool call or resource involved and checked for specific scopes (e.g. a view scope for viewing a document).
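A sketch of that tool-level check, assuming JSON-RPC-style tools/call requests and a hypothetical scope map (the tool names and scope values below are made up for illustration):

```python
# Hypothetical mapping of MCP tool names to the scope each one requires.
TOOL_SCOPES = {
    "read_document": "view",
    "update_document": "write",
    "delete_document": "admin",
}

def authorize_tool_call(request: dict, granted_scopes: set[str]) -> bool:
    """Check the scope needed for the specific tool named in a tools/call
    request, after generic route-level middleware checks have passed."""
    if request.get("method") != "tools/call":
        return True  # non-tool requests: rely on route-level middleware
    tool = request["params"]["name"]
    required = TOOL_SCOPES.get(tool)
    if required is None:
        return False  # unknown tool: deny by default
    return required in granted_scopes
```

A token granted only the view scope can then call read_document but is rejected for update_document, even though both arrive at the same MCP endpoint.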

Open questions and recommendations

The following open questions are drawn from outstanding issues the MCP spec has yet to fully resolve. While we can only speculate as to the eventual shape the protocol’s authorization requirements will take, understanding its current state can provide much-needed insight.

Scope discovery

How should an MCP client know what scopes to request during the OAuth flow? The specification doesn’t define scopes—implementors must decide. A fixed scope like access_mcp may suffice in some cases, and the ASM can list scopes_supported, though without indicating which scopes are needed for a specific action. Future revisions will ideally include scope recommendations or hints in WWW-Authenticate errors.

DCR security

A key future consideration is how secure and standardized Dynamic Client Registration should be. The current OAuth specification allows for flexibility, but this can open the door to client impersonation or unverified registrations. This becomes particularly important for federated environments or public tools, where the authorization server needs to assert trust before issuing credentials.

MCP implementations may benefit from:

  • A verification endpoint or flow to validate ownership (e.g., of redirect URIs or domains).

  • Requiring proof of control mechanisms like DNS verification, signed JWT assertions, or file-based challenges.

  • Protective policies that prevent malicious clients from registering identities that mimic trusted apps.

Handling 403 over SSE connections

SSE streams complicate error handling. If a token lacks necessary scopes:

  • The server could filter out unauthorized events, or

  • terminate the stream with an error (e.g., 403/401).

Since SSE uses a persistent stream post-200 OK, there's no built-in way to signal errors mid-connection. A custom event could be used, but isn’t standard. For now, scopes should be enforced on initial connection or request at the HTTP level rather than SSE.
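That connection-time enforcement might look like this minimal sketch (the helper name and response shape are assumptions, not a real framework's API):

```python
def open_sse_stream(granted_scopes: set[str], required_scopes: set[str]) -> dict:
    """Enforce scopes before the SSE stream opens: once 200 OK is sent,
    there is no standard way to signal an authorization error mid-stream."""
    missing = required_scopes - granted_scopes
    if missing:
        # Fail at the HTTP layer, before any event stream starts.
        return {
            "status": 403,
            "body": "insufficient_scope: " + " ".join(sorted(missing)),
        }
    return {"status": 200, "headers": {"Content-Type": "text/event-stream"}}
```

If a token expires mid-stream, the pragmatic options remain closing the connection and letting the client reconnect with a fresh token, since SSE clients reconnect automatically by design.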

Universal JWT validation via JWKs

Ideally, an MCP server accepts JWTs directly from a trusted IdP, verifies them via JWKs, and authorizes based on included claims or scopes—without issuing its own token. This requires:

  • IdP-issued JWTs to include necessary claims.

  • MCP server to validate signatures via jwks_uri.

This stateless approach avoids the need for token mapping and call-backs to the IdP. However, the specification currently leans toward requiring MCP-issued tokens. 

Exploring MCP authorization approaches

As MCP matures from a local developer tool into a remote-first protocol for secure, cross-system AI integration, its authorization model must evolve accordingly. While MCP auth discussions are ongoing, one RFC to watch right now is #284, which proposes making third-party IdP integration the default, moving IdP discovery to WWW-Authenticate instead of a hosted metadata document, and more. In general, the shift toward OAuth 2.1 represents a necessary step in aligning MCP with modern, enterprise-grade access control patterns—but it’s not without friction. 

The good news is that the broader OAuth ecosystem provides a wealth of battle-tested solutions. By strictly treating MCP servers as resource servers, reusing external identity providers, and embracing modern standards, teams can reduce implementation complexity and avoid reinventing the wheel.

For more developer deep-dives, subscribe to our blog, follow us on LinkedIn, and chat with us in our Slack community.