Two weeks ago, 1,200 developers packed into the New York Marriott Marquis for the first MCP Dev Summit. The week after, the A2A project shipped v1.0. If you’re building anything with AI agents, you’ve probably seen both acronyms flying around and wondered: are these competing standards, or do I need both?
The answer — annoyingly but accurately — is that they solve completely different problems. MCP handles how an agent talks to tools. A2A handles how agents talk to each other. Confusing them is like confusing HTTP with email. Both move data, but nobody asks “should I use HTTP or SMTP?”
I’ve spent the past few months building agent pipelines that use both protocols, and the distinction becomes obvious the moment you try to wire up a real system. Here’s the practical breakdown.
The Two-Protocol Problem
Think about what happens when you ask an AI agent to do something nontrivial — say, “find the cheapest flight to Tokyo next month and book it.”
That agent needs two very different kinds of communication. First, it needs to reach out to flight APIs, read your calendar, check your budget spreadsheet, and process a payment. That’s tool access — the agent interacting with external services and data sources. This is MCP territory.
But what if no single agent knows how to do all of that? Maybe you have a travel research agent that’s great at finding flights, a calendar agent that understands your schedule, and a booking agent that handles payments. Now these agents need to talk to each other, negotiate, and pass work around. That’s A2A territory.
Most real agent systems need both layers. The confusion comes from the fact that both protocols landed at roughly the same time, both are now under the same foundation, and the tech press keeps framing them as competitors.
MCP: How Agents Connect to the World
The Model Context Protocol started at Anthropic and was open-sourced in late 2024. The premise is straightforward: every AI tool integration was a bespoke snowflake. Want Claude to read your GitHub repos? Custom integration. Want it to query your database? Another custom integration. Every new tool meant another adapter, another auth flow, another set of assumptions about how data should flow.
MCP standardizes this into a client-server architecture with a clean JSON-RPC 2.0 protocol underneath.
The Architecture in Practice
An MCP setup has three layers:
The host is whatever application the user interacts with — Claude Desktop, Cursor, VS Code with Copilot, your custom agent app. The host manages the user experience and coordinates everything.
MCP clients live inside the host and maintain connections to MCP servers. Each client talks to one server, and the host can run multiple clients simultaneously.
MCP servers are lightweight processes that expose capabilities through three primitives:
- Tools — executable functions the AI can call. Think “search_files,” “run_query,” “send_email.” The model decides when to invoke them through function calling.
- Resources — data the AI can read for context. File contents, database schemas, API documentation. Read-only, no side effects.
- Prompts — reusable templates that structure interactions. Less discussed, but useful for building consistent workflows.
When a host starts up, it launches its configured MCP servers and performs a capability handshake — “what can you do?” The server responds with its list of tools, resources, and prompts. When the AI model decides it needs to use a tool, the client routes the request to the right server, the server executes it, and the result flows back.
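To make the three primitives concrete, here's a minimal sketch of a server built with the official Python SDK's FastMCP helper. The weather tool, notes resource, and summarize prompt are invented for illustration, and the decorator API may differ slightly across SDK versions.

```python
# Minimal MCP server sketch using the Python SDK's FastMCP helper.
# The tool, resource, and prompt below are made-up examples.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-server")

@mcp.tool()
def get_weather(city: str) -> str:
    """Executable function the model can call via function calling."""
    # A real server would hit an external API here.
    return f"It is sunny in {city}."

@mcp.resource("notes://readme")
def readme() -> str:
    """Read-only context the model can pull in; no side effects."""
    return "This server demonstrates the tool and resource primitives."

@mcp.prompt()
def summarize(text: str) -> str:
    """Reusable prompt template for consistent workflows."""
    return f"Summarize the following in two sentences:\n\n{text}"

if __name__ == "__main__":
    # Defaults to the stdio transport for local use.
    mcp.run()
```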
Transport: Local and Remote
MCP originally only supported stdio — standard input/output between local processes. Your MCP server ran on the same machine as your client. Fast, zero network overhead, but obviously limited.
Remote communication came later, initially over HTTP with Server-Sent Events. That's what made MCP practical for production: you can run MCP servers as hosted services, share them across teams, and build an ecosystem of reusable integrations.
The summit’s big technical push was around Streamable HTTP transport, which replaces the older SSE-only approach. Servers can now return simple HTTP responses for quick operations or upgrade to streaming for longer ones. There was also significant discussion around a gRPC transport for high-throughput enterprise deployments.
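With the Python SDK, the transport is usually just a flag on `run()`. The identifiers below match recent SDK versions but have shifted as the spec evolved, so treat this as a sketch and check your version's docs.

```python
# Same server object as in the earlier sketch; only the transport choice changes.
if __name__ == "__main__":
    # Local: talk to the host over stdin/stdout.
    # mcp.run(transport="stdio")

    # Remote: serve over Streamable HTTP so hosted clients can connect.
    mcp.run(transport="streamable-http")
```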
Where MCP Is Right Now
The adoption numbers are hard to argue with: over 97 million monthly SDK downloads across Python and TypeScript as of February 2026. Every major AI provider has adopted it — Anthropic, OpenAI, Google, Microsoft, Amazon. The MCP Apps extension, which lets servers provide interactive UIs to clients, launched in January and is already supported in Claude, ChatGPT, VS Code with GitHub Copilot, and several other tools.
The 2026 roadmap focuses on authentication improvements, observability integration, and horizontal HTTP scaling. The experimental “tasks” primitive — which gives servers a way to handle long-running operations by returning a durable handle while work continues in the background — got a lot of attention at the summit. So did “triggers,” which are essentially webhooks for MCP, letting servers proactively notify clients when new data is available.
A2A: How Agents Find and Work With Each Other
Google Cloud launched A2A in April 2025 with over 50 enterprise partners. It reached v1.0 in early 2026 with some serious additions: gRPC support, signed Agent Cards, and multi-tenancy. While MCP is about giving one agent access to tools, A2A is about enabling multiple agents — potentially built by different vendors on different platforms — to discover each other and collaborate on tasks.
The core problem A2A solves: your company runs a Salesforce agent for CRM, a ServiceNow agent for IT ops, and a custom agent for internal analytics. How do they work together on a request that spans all three systems? You can’t just hardcode the integrations — the whole point is that agents are opaque. You don’t know (or care) how the other agent works internally.
Agent Cards: The Discovery Mechanism
Every A2A-compatible agent publishes an Agent Card at /.well-known/agent.json. It’s a JSON document that describes who the agent is, what it can do, what authentication it requires, and where to reach it. Think of it like DNS for agents — a standardized way to discover and understand capabilities before you start talking.
An Agent Card includes:
- Identity — name, description, provider information
- Skills — structured declarations of what the agent can do (“I can look up order status,” “I can generate financial reports”)
- Endpoint — where to send requests
- Authentication — what credentials or tokens are required
- Supported modes — whether the agent handles synchronous requests, streaming, push notifications
With v1.0, Agent Cards can be cryptographically signed, which matters a lot in enterprise environments where you need to verify that the agent you’re talking to is actually who it claims to be.
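Here's roughly what a card might look like for the logistics agent used as an example later in this post. The field names approximate the published A2A schema rather than reproduce it exactly, so treat this as illustrative, not normative.

```python
# Illustrative Agent Card. Field names loosely follow the A2A schema;
# verify against the spec before relying on any of them.
import json

agent_card = {
    "name": "Logistics Agent",
    "description": "Handles shipment redirects and carrier status checks.",
    "url": "https://logistics.internal/a2a",
    "version": "1.0.0",
    "provider": {"organization": "Example Corp"},
    "capabilities": {"streaming": True, "pushNotifications": True},
    "authentication": {"schemes": ["bearer"]},
    "skills": [
        {
            "id": "redirect_shipment",
            "name": "Redirect shipment",
            "description": "Change the delivery address for an in-flight order.",
        },
        {
            "id": "check_carrier_status",
            "name": "Check carrier status",
            "description": "Look up the latest carrier tracking events.",
        },
    ],
}

# Served as JSON at /.well-known/agent.json so other agents can discover it.
print(json.dumps(agent_card, indent=2))
```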
Task Lifecycle: The Core Workflow
A2A’s fundamental unit of work is the Task. When a client agent wants something done by a remote agent, it sends a message/send request. The remote agent creates a Task with a unique ID and starts processing.
Tasks move through a defined lifecycle: submitted → working → completed (or failed). There’s also an input-required state for when the remote agent needs more information to proceed — which is where things get interesting, because it means A2A natively supports multi-turn conversations between agents.
The protocol separates progress from outputs. TaskStatusUpdateEvent messages communicate what’s happening (“Pulling flight data…”, “Comparing 47 options…”), while TaskArtifactUpdateEvent messages deliver actual results. This distinction matters for building UIs — you can show the user real-time progress without mixing it in with final outputs.
For long-running tasks — and some agent workflows genuinely take hours — A2A supports asynchronous push notifications via webhooks. The client supplies a callback URL, and the server pushes updates as the task progresses. The client can disconnect and reconnect without losing state.
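From the client side, a task submission is a JSON-RPC call. In the sketch below the message/send method name comes from the protocol, but the payload and result shapes are simplified and the endpoint is hypothetical, so verify field names against the spec.

```python
# Sketch of sending a task to an A2A agent over plain JSON-RPC-over-HTTP.
# The spec also defines methods such as tasks/get for polling status later.
import httpx

A2A_ENDPOINT = "https://logistics.internal/a2a"  # taken from the Agent Card

def send_task(text: str) -> dict:
    request = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "message/send",
        "params": {
            "message": {
                "role": "user",
                "parts": [{"kind": "text", "text": text}],
            }
        },
    }
    response = httpx.post(A2A_ENDPOINT, json=request, timeout=30)
    response.raise_for_status()
    return response.json()["result"]

task = send_task("Redirect order #4812 to the customer's verified address.")
print(task["id"], task["status"]["state"])  # e.g. "working", later "completed"
```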
The Enterprise Play
A2A’s design choices make more sense when you think about enterprise deployments. Agents are opaque — you don’t need to know the internal implementation. They might run on different clouds, different frameworks, different LLMs. A2A doesn’t care. It just defines the conversation protocol.
This is why the v1.0 release focused so heavily on security primitives. Signed Agent Cards, mutual TLS, multi-tenancy support — these are features that procurement teams and security reviewers ask about. Google built A2A for the kind of deployments where a Fortune 500 company has agents from six different vendors that need to cooperate.
Side-by-Side: The Differences That Matter
Enough background. Here’s what actually differs in practice:
| Dimension | MCP | A2A |
|---|---|---|
| Core relationship | Agent → Tool/Data | Agent → Agent |
| Communication pattern | Client-server, request-response | Peer-to-peer, task-based |
| Discovery | Configured at startup (server list) | Runtime discovery via Agent Cards |
| Protocol | JSON-RPC 2.0 | JSON-RPC 2.0 + gRPC |
| State model | Stateful sessions | Stateful tasks with lifecycle |
| Key primitives | Tools, Resources, Prompts | Agent Cards, Tasks, Artifacts |
| Opacity | Transparent: tools, parameters, resources exposed to client | Agents are opaque to each other |
| Auth model | Per-server configuration | Per-agent, with signed cards |
| Streaming | SSE / Streamable HTTP | SSE + push notifications |
| Originator | Anthropic (2024) | Google (2025) |
| Governance | Agentic AI Foundation / Linux Foundation | Agentic AI Foundation / Linux Foundation |
The most important row in that table is “opacity.” MCP servers are transparent — the client knows what tools are available, what parameters they take, what resources exist. This is by design. The AI model needs to understand the tool to decide when and how to use it.
A2A agents are opaque. You know what skills an agent advertises, but you don’t know how it implements them. You send a task, you get results (or errors). This is also by design — it’s what enables multi-vendor, multi-platform agent ecosystems where nobody has to expose their internals.
How They Work Together (With a Real Example)
Here’s a concrete architecture that uses both protocols. Say you’re building a customer support system for an e-commerce company.
The front-line agent receives customer messages and figures out intent. It uses MCP to connect to your knowledge base (MCP Resource), your CRM (MCP Tool to look up customer data), and your ticketing system (MCP Tool to create/update tickets).
A customer writes: “I ordered a laptop last week but the tracking says it’s going to the wrong address.” The front-line agent looks up the order via MCP, confirms the shipping issue, and realizes it needs to interact with the logistics system — which is managed by a separate team running their own agent.
The logistics agent is an A2A-compatible service published with an Agent Card at logistics.internal/.well-known/agent.json. Its skills include “redirect_shipment” and “check_carrier_status.” The front-line agent discovers it, sends an A2A task to redirect the shipment, and receives status updates as the logistics agent processes the request.
Meanwhile, the logistics agent internally uses MCP to connect to FedEx’s API, the warehouse management system, and the address validation service. Its MCP connections are invisible to the front-line agent — that’s the opacity A2A provides.
The flow looks like:
    Customer → Front-line Agent
        ├── MCP → Knowledge Base (resource)
        ├── MCP → CRM (tool: lookup_customer)
        ├── MCP → Ticketing (tool: update_ticket)
        └── A2A → Logistics Agent
                ├── MCP → FedEx API (tool: redirect)
                ├── MCP → Warehouse DB (resource)
                └── MCP → Address Validator (tool)

MCP is the vertical connection — agent to tools and data. A2A is the horizontal connection — agent to agent. They’re not competing. They’re complementary layers in the same stack.
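A compressed sketch of the glue code inside the front-line agent, assuming an MCP client session like the SDK provides and the send_task helper from earlier. The tool name lookup_customer matches the diagram; the response handling is purely illustrative.

```python
# Hypothetical front-line agent logic: MCP for its own tools (vertical),
# A2A for delegating to the logistics agent (horizontal).
async def handle_shipping_complaint(session, customer_message: str) -> str:
    # Vertical: call the CRM lookup tool on one of this agent's MCP servers.
    order = await session.call_tool("lookup_customer", {"query": customer_message})
    order_summary = order.content[0].text

    # Horizontal: hand the cross-team work to the logistics agent over A2A.
    # send_task is the helper sketched above (a blocking call, kept for brevity).
    task = send_task(f"Redirect shipment for {order_summary} to the verified address.")
    return f"Opened logistics task {task['id']} ({task['status']['state']})."
```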
The Agentic AI Foundation: Why Governance Matters
Both protocols are now governed by the Agentic AI Foundation, formed in December 2025 under the Linux Foundation. Anthropic contributed MCP, Google contributed A2A, Block contributed goose (an open-source AI agent), and OpenAI contributed AGENTS.md.
As of March 2026, AAIF has 146 member organizations across three tiers. The Platinum members are a who’s-who of tech: AWS, Anthropic, Block, Bloomberg, Cloudflare, Google, Microsoft, OpenAI. Gold includes IBM, Salesforce, SAP, Shopify, Docker, JetBrains, Oracle, JPMorgan Chase. Silver has Zapier, Hugging Face, Uber, Pydantic, WorkOS, and dozens more.
Why does this matter? Because protocol adoption is a coordination game. USB beat FireWire not because it was technically superior but because more companies backed it. Having Anthropic and Google — the respective creators — sitting in the same foundation, governed by the same charter, with buy-in from basically every major cloud and enterprise vendor, dramatically reduces the risk of a format war.
It also means the protocols will evolve in a coordinated way. The summit had sessions specifically about MCP-A2A interop patterns, and there’s active work on defining how an A2A Agent Card can advertise MCP-compatible tool interfaces.
What About ACP?
You might have seen the Agent Communication Protocol mentioned alongside MCP and A2A. ACP, developed by IBM, focuses on agent-to-agent communication within a single trusted deployment — think agents on the same cluster or within the same organization’s infrastructure. It’s lighter-weight than A2A and doesn’t include the cross-organization trust primitives (signed cards, mutual TLS) that A2A provides.
ACP has its niche, but for most developers choosing between protocols right now, the MCP + A2A combination covers the vast majority of use cases. ACP becomes relevant if you’re doing high-frequency agent communication within a single trust boundary and need minimal overhead.
The Decision Guide
Here’s my practical take on when you need what:
You only need MCP if you’re building a single agent (or a tightly coupled agent system) that needs to interact with external tools and data. This covers a huge number of use cases — chatbots with tool access, coding assistants, data analysis agents, personal productivity agents. If your agents are all in your codebase and you control the orchestration, MCP alone is probably enough.
You need A2A when agents cross organizational or team boundaries. If you’re integrating with agents you didn’t build, or building agents that other teams will consume as services, A2A provides the discovery, negotiation, and trust mechanisms that make that work. Multi-vendor agent ecosystems, enterprise agent marketplaces, cross-company collaborations — this is A2A territory.
You need both when you’re building a multi-agent system where each agent needs its own tool access AND agents need to collaborate across boundaries. This is increasingly the default for any serious enterprise deployment. Your agent uses MCP to do its job, and A2A to coordinate with other agents.
If you’re just starting out, start with MCP. It’s more mature, has broader SDK support, and the ecosystem of pre-built MCP servers is enormous. You can always add A2A later when your architecture grows to the point where you need agent-to-agent communication.
Getting Your Hands Dirty
The fastest way to understand these protocols is to build something small with each.
For MCP, grab the TypeScript or Python SDK and build a server that exposes a tool — even something trivial like a weather lookup or file search. Connect it to Claude Desktop or any MCP-compatible client. The official docs at modelcontextprotocol.io walk you through this in about fifteen minutes. Once you’ve felt the handshake happen and seen your tool appear in the client’s tool list, the architecture clicks.
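If you'd rather watch the handshake from code instead of a desktop client, a small Python script can launch your server over stdio and list its tools. Here weather_server.py and get_weather refer to the hypothetical server sketched earlier in this post.

```python
# Minimal MCP client: spawn the server as a subprocess over stdio,
# perform the capability handshake, list tools, and call one.
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main() -> None:
    params = StdioServerParameters(command="python", args=["weather_server.py"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()            # the capability handshake
            tools = await session.list_tools()
            print([t.name for t in tools.tools])  # your tool should appear here
            result = await session.call_tool("get_weather", {"city": "Tokyo"})
            print(result.content[0].text)

asyncio.run(main())
```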
For A2A, the GitHub repo at github.com/a2aproject/A2A has sample implementations. Build a simple agent server, publish an Agent Card, and have a client agent discover and interact with it. The task lifecycle — submitted, working, completed — makes a lot more sense when you’ve watched the status events stream through in real time.
Then try combining them. Build an agent that uses MCP internally (connecting to a couple of tools) and exposes itself as an A2A service. Have another agent discover it and send it a task. That exercise alone will make the architectural distinction completely clear.
Both protocols are still young. MCP has a big head start in adoption and tooling, while A2A just hit 1.0 and production deployments are still mostly at large enterprises. My guess is that within a year, supporting both will feel as normal as supporting REST and webhooks — but there’s real uncertainty about how the interop story shakes out in practice. Worth watching, and worth getting your hands on both now so you aren’t scrambling later.