Robots Atlas

Model Context Protocol

MCP standardizes how AI applications connect to external data sources and tools, replacing per-pair custom integrations with a single open client-host-server protocol built on JSON-RPC 2.0.

Category
Abstraction level
Operation level
01

Host

Container and coordinator: creates client instances, manages their lifecycle, enforces security and user consent policies, and coordinates integration with the language model.

Modular

The host process acts as the container and coordinator. It creates and manages multiple client instances, controls client connection permissions and lifecycle, enforces security policies and user consent requirements, and coordinates LLM integration and sampling.
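The host's role can be sketched as a small coordinator object: one isolated client record per server connection, with the host controlling lifecycle and permissions. All class and method names below are illustrative, not the official SDK API.

```python
# Hypothetical host sketch: the host owns one client per server connection,
# controls each connection's lifecycle, and gates what it may do.
class Host:
    def __init__(self):
        self.clients = {}            # server name -> client-state record

    def connect(self, server_name: str, permissions: set):
        # One isolated client instance per server, with its own permissions.
        self.clients[server_name] = {"permissions": set(permissions),
                                     "connected": True}

    def disconnect(self, server_name: str):
        self.clients[server_name]["connected"] = False

    def allowed(self, server_name: str, action: str) -> bool:
        record = self.clients[server_name]
        return record["connected"] and action in record["permissions"]

host = Host()
host.connect("filesystem", {"resources/read"})
host.connect("github", {"tools/call"})
```

Because each record is independent, revoking or disconnecting one server never affects the others.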

02

Client

Maintains an isolated, stateful 1:1 connection with a single server; handles protocol negotiation and capability exchange; relays messages bidirectionally.

Each client is created by the host and maintains a stateful, isolated 1:1 session with a specific server. It handles protocol negotiation, capability exchange, bidirectional message routing, and subscription management.
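The stateful part of a session is mostly bookkeeping: the client assigns monotonically increasing request ids and pairs each incoming response with the request that produced it. A minimal sketch, assuming a `send` callable as the transport (names are illustrative, not the official SDK):

```python
import itertools
import json

class MCPClientSession:
    """Hypothetical sketch of a client's stateful 1:1 session with one server."""

    def __init__(self, send):
        self._send = send
        self._ids = itertools.count(1)      # monotonically increasing request ids
        self._pending = {}                  # id -> method, awaiting a response

    def request(self, method, params=None):
        req_id = next(self._ids)
        self._pending[req_id] = method
        self._send(json.dumps({
            "jsonrpc": "2.0",
            "id": req_id,
            "method": method,
            "params": params or {},
        }))
        return req_id

    def handle_response(self, raw):
        msg = json.loads(raw)
        method = self._pending.pop(msg["id"])  # pair response with its request
        return method, msg.get("result")

# Usage: wire `send` to a real transport; here we just capture the frame.
sent = []
session = MCPClientSession(sent.append)
rid = session.request("tools/list")
method, result = session.handle_response(
    json.dumps({"jsonrpc": "2.0", "id": rid, "result": {"tools": []}})
)
```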

03

Server

Provides specialized context and capabilities through three primitive types: Resources, Prompts, and Tools; can run as a local process or remote service.

Modular

Servers provide specialized context and capabilities via three primitives: Resources (context data), Prompts (templated workflows), and Tools (executable functions). Servers operate independently with focused responsibilities and can be local processes or remote services.

  • Local (stdio) server
  • Remote (HTTP/SSE) server
04

Primitives (server-side)

Three capability types exposed by servers: Resources (contextual data), Prompts (instruction templates), Tools (executable functions).

Server-side primitives define the capabilities a server can offer: Resources provide structured data for the model's context window; Prompts are templated messages and workflow instructions; Tools are executable functions the model can invoke to retrieve information or perform actions.
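A toy dispatcher makes the three-primitive split concrete. Method names (`resources/list`, `prompts/list`, `tools/call`) follow MCP's naming convention; the handlers and sample data are illustrative:

```python
# Hypothetical in-memory server exposing the three server-side primitives.
RESOURCES = [{"uri": "file:///notes.txt", "name": "Notes"}]
PROMPTS = [{"name": "summarize", "description": "Summarize a document"}]
TOOLS = {"add": lambda a, b: a + b}

def handle(message: dict) -> dict:
    method, params = message["method"], message.get("params", {})
    if method == "resources/list":
        result = {"resources": RESOURCES}
    elif method == "prompts/list":
        result = {"prompts": PROMPTS}
    elif method == "tools/call":
        fn = TOOLS[params["name"]]
        result = {"content": [{"type": "text",
                               "text": str(fn(**params["arguments"]))}]}
    else:
        return {"jsonrpc": "2.0", "id": message["id"],
                "error": {"code": -32601, "message": "Method not found"}}
    return {"jsonrpc": "2.0", "id": message["id"], "result": result}

reply = handle({"jsonrpc": "2.0", "id": 7, "method": "tools/call",
                "params": {"name": "add", "arguments": {"a": 2, "b": 3}}})
```

Note the asymmetry: Resources and Prompts are read-style listings, while Tools are invocations with arguments and side effects.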

05

Primitives (client-side)

Two capability types exposed by clients: Roots (filesystem/URI access boundaries) and Sampling (server-initiated LLM completion requests); a third, Elicitation, was added in a later spec revision.

Client-side primitives define capabilities the client exposes to servers: Roots give servers access to filesystem or URI boundaries on the client side; Sampling allows servers to request LLM completions, enabling agentic and recursive behaviors.
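Sampling reverses the usual direction: the server sends the request. A sketch of building such a frame, using the `sampling/createMessage` method name from the spec; the exact payload is simplified for illustration:

```python
import json

# A server-initiated sampling request: the server asks the client to run an
# LLM completion on its behalf. Simplified payload for illustration.
def make_sampling_request(req_id: int, user_text: str, max_tokens: int = 100) -> str:
    return json.dumps({
        "jsonrpc": "2.0",
        "id": req_id,
        "method": "sampling/createMessage",
        "params": {
            "messages": [
                {"role": "user",
                 "content": {"type": "text", "text": user_text}},
            ],
            "maxTokens": max_tokens,
        },
    })

frame = make_sampling_request(1, "Summarize the attached changelog.")
```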

06

JSON-RPC 2.0 transport

Communication layer of the protocol; defines the message format and the request-response exchange mechanism between client and server.

Modular

MCP uses JSON-RPC 2.0 as its base message format. The transport layer is pluggable: initial versions used stdio streams; later versions added HTTP with Server-Sent Events (SSE). The protocol is stateful within a session.
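JSON-RPC 2.0 gives MCP three message kinds: requests (with an `id`), responses (echoing that `id` with a `result` or `error`), and notifications (no `id`, fire-and-forget). Payload values below are illustrative; only the envelope fields are mandated by JSON-RPC:

```python
# The three JSON-RPC 2.0 message kinds an MCP session exchanges.
request = {"jsonrpc": "2.0", "id": 1,
           "method": "resources/list", "params": {}}
response = {"jsonrpc": "2.0", "id": 1,
            "result": {"resources": []}}
notification = {"jsonrpc": "2.0",               # no "id": fire-and-forget
                "method": "notifications/resources/updated",
                "params": {"uri": "file:///notes.txt"}}

def is_notification(msg: dict) -> bool:
    return "id" not in msg and "method" in msg
```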

07

Capability negotiation

Feature-declaration step during session initialization; client and server each announce their supported capabilities before exchanging operational messages.

During session initialization, clients and servers explicitly declare their supported features. This capability-based negotiation determines which protocol features and primitives are available for the session, ensuring forward and backward compatibility.
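The effect of negotiation is simple to state: a primitive is usable only if the side that provides it declared it. Capability names below mirror the spec; the helper is an illustrative sketch:

```python
# Sketch of capability-based negotiation at session start: each side declares
# what it supports, and only declared features are active for the session.
client_capabilities = {"roots": {"listChanged": True}, "sampling": {}}
server_capabilities = {"resources": {"subscribe": True}, "tools": {}}

def session_features(client_caps: dict, server_caps: dict) -> dict:
    # Server-side primitives come from the server; client-side from the client.
    return {
        "server": sorted(server_caps),   # e.g. resources, tools
        "client": sorted(client_caps),   # e.g. roots, sampling
    }

features = session_features(client_capabilities, server_capabilities)
# This server never declared prompts, so prompts/* methods must not be used.
```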

Parallelism

Fully parallel

Each client maintains an independent session with a single server; a host can run multiple parallel client-server sessions simultaneously with no interdependencies between them.

Transport Type

Standard
  • stdio: local subprocess, communication via standard I/O
  • HTTP+SSE: remote server, communication via HTTP with Server-Sent Events

The communication transport between client and server. Initial MCP supported stdio (local subprocess); later versions added HTTP with SSE for remote servers.
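The stdio transport is just newline-delimited JSON over a child process's standard streams. In this sketch the "server" is an inline Python script standing in for a real MCP server binary:

```python
import json
import subprocess
import sys

# Minimal stdio-transport sketch: the child reads one newline-delimited
# JSON-RPC request from stdin and writes a response to stdout.
SERVER = (
    "import sys, json\n"
    "req = json.loads(sys.stdin.readline())\n"
    "json.dump({'jsonrpc': '2.0', 'id': req['id'], 'result': {'ok': True}}, sys.stdout)\n"
    "sys.stdout.write('\\n')\n"
)

proc = subprocess.Popen([sys.executable, "-c", SERVER],
                        stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True)
request = {"jsonrpc": "2.0", "id": 1, "method": "ping", "params": {}}
out, _ = proc.communicate(json.dumps(request) + "\n")
response = json.loads(out)
```

An HTTP+SSE transport carries the same JSON frames; only the delivery mechanism changes.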

Active Primitives

Standard
  • Tools only: server exposes only executable tools, no resources or prompts
  • Resources + Tools: common configuration for data-access and action servers

Which server-side (Resources, Prompts, Tools) and client-side (Roots, Sampling, Elicitation) primitives are enabled. Capability negotiation at session start determines which are active.

Sampling (server → LLM)

Standard
  • true: enables agentic, recursive LLM behaviors initiated by the server
  • false: default conservative configuration; servers cannot trigger LLM calls

Whether servers are permitted to request LLM completions from the client side. Requires explicit user consent and client declaration of sampling capability.
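One way to enforce the consent requirement is to route every sampling request through a user-approval callback before any model call happens. The callback and LLM stub below are illustrative stand-ins:

```python
# Consent gate sketch: no LLM call occurs unless the user approves the
# exact prompt the server wants sampled.
def handle_sampling_request(params: dict, ask_user, call_llm):
    prompt = params["messages"][-1]["content"]["text"]
    if not ask_user(prompt):                 # user sees the exact prompt
        return {"error": {"code": -1, "message": "User rejected sampling"}}
    return {"result": {"role": "assistant",
                       "content": {"type": "text", "text": call_llm(prompt)}}}

params = {"messages": [{"role": "user",
                        "content": {"type": "text", "text": "hello"}}]}
approved = handle_sampling_request(params, ask_user=lambda p: True,
                                   call_llm=lambda p: p.upper())
rejected = handle_sampling_request(params, ask_user=lambda p: False,
                                   call_llm=lambda p: p.upper())
```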

Common pitfalls

Prompt injection vulnerability via untrusted servers
CRITICAL

Tool descriptions and server-provided annotations are untrusted by default. A malicious or compromised MCP server can attempt to inject instructions into the model's context via resource content or tool descriptions (prompt injection). Hosts must treat all server-provided content as untrusted.

Hosts must not automatically trust tool descriptions or resource content from servers. Implement sandboxing, allowlisting trusted servers, and require explicit user confirmation before tool invocation.
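The allowlist-plus-confirmation pattern can be sketched in a few lines; server names and helper signatures are illustrative:

```python
# Host-side defense sketch: only allowlisted servers may have their tools
# invoked, and each invocation requires explicit user confirmation.
TRUSTED_SERVERS = {"github-official", "postgres-internal"}

def invoke_tool(server_name: str, tool_name: str, args: dict, confirm, run):
    if server_name not in TRUSTED_SERVERS:
        raise PermissionError(f"server {server_name!r} is not allowlisted")
    if not confirm(f"Run {tool_name} from {server_name} with {args}?"):
        return None                        # user declined; nothing executed
    return run(tool_name, args)

result = invoke_tool("github-official", "search_issues", {"q": "bug"},
                     confirm=lambda msg: True,
                     run=lambda name, args: f"{name}:{args['q']}")
```

Note that the untrusted tool description is never interpreted here; it is only ever shown to the user inside the confirmation prompt.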

Absent server identity verification in early protocol versions
HIGH

Early MCP versions (pre-2025-11-25) lacked a standardized mechanism for server identity verification, making it possible for a malicious process to impersonate a trusted server.

Use spec version 2025-11-25 or later, which introduced server identity. Implement additional authentication at the transport layer (e.g., OAuth 2.0 for HTTP transports).

Excessive context window consumption by tool declarations
MEDIUM

When many MCP servers with many tools are connected simultaneously, the tool declarations inserted into the model's context window can consume a significant portion of the available token budget, reducing the space available for actual task context.

Limit the number of simultaneously connected servers and tools. Use lazy-loading patterns or selective tool exposure based on task context.
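Selective exposure can be as simple as ranking tools by task relevance and cutting off at a token budget. The 4-characters-per-token estimate and the keyword scoring are rough illustrative heuristics, not part of the protocol:

```python
# Selective tool exposure sketch: rank candidate tools by relevance and stop
# adding declarations once an estimated token budget is exhausted.
def estimate_tokens(text: str) -> int:
    return max(1, len(text) // 4)          # rough heuristic, not a tokenizer

def select_tools(tools, task_keywords, budget_tokens):
    def score(tool):
        text = (tool["name"] + " " + tool["description"]).lower()
        return sum(kw in text for kw in task_keywords)
    chosen, used = [], 0
    for tool in sorted(tools, key=score, reverse=True):
        cost = estimate_tokens(tool["description"])
        if used + cost > budget_tokens:
            break
        chosen.append(tool["name"])
        used += cost
    return chosen

tools = [
    {"name": "query_db", "description": "Run a SQL query against the database"},
    {"name": "send_email", "description": "Send an email to a recipient"},
]
picked = select_tools(tools, task_keywords=["sql", "database"], budget_tokens=12)
```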

Sampling without explicit user consent
HIGH

If a host implements the Sampling primitive without requiring explicit user approval for each sampling request, servers can trigger LLM completions without user awareness, enabling potentially uncontrolled agentic behaviors.

Require explicit user consent for every sampling request. The MCP specification mandates that users must explicitly approve sampling and control what prompts are sent.

2024

First public release of MCP

breakthrough

Anthropic open-sourced Model Context Protocol on November 25, 2024 (spec version 2024-11-05) with SDKs for Python and TypeScript and reference server implementations for Google Drive, Slack, GitHub, Postgres, Git, and Puppeteer.

2025

Adoption by OpenAI and Google DeepMind

breakthrough

In March 2025, OpenAI officially adopted MCP across its Agents SDK, Responses API, and ChatGPT desktop app. In April 2025, Google DeepMind confirmed MCP support in Gemini models. Over 1,000 community-built MCP servers were available by early 2025.

2025

Specification update 2025-11-25

The specification received major updates including asynchronous operations, statelessness support, server identity, Elicitation primitive (server-initiated user queries), and an official community-driven server registry.

2025

Protocol transferred to Agentic AI Foundation

breakthrough

In December 2025, Anthropic donated MCP governance to the Agentic AI Foundation (AAIF), a directed fund under the Linux Foundation, co-founded by Anthropic, Block, and OpenAI.

Hardware agnostic
PRIMARY

MCP is a communication protocol and interface specification; it has no requirements or preferences regarding specific hardware. It runs on any environment capable of executing a process that handles JSON-RPC 2.0.