Agent-to-Agent Communication: A2A, MCP, and Inter-Agent Protocols (2026)
A single AI agent is useful. Multiple agents that can discover each other, delegate tasks, and share results are transformative. But agents built with different frameworks (OpenAI, LangChain, Claude Code) can't talk to each other by default. They need a shared protocol.
In 2026, three protocols dominate agent-to-agent communication: Google's A2A (Agent-to-Agent), Anthropic's MCP (Model Context Protocol), and emerging patterns like shared workspaces. Here's when to use each.
The protocol stack
Think of it as layers:
┌─────────────────────────────────┐
│ A2A (Agent-to-Agent)            │  Agents talk to agents
├─────────────────────────────────┤
│ MCP (Model Context Protocol)    │  Agents talk to tools/resources
├─────────────────────────────────┤
│ HTTP/WebSocket/gRPC             │  Transport layer
└─────────────────────────────────┘
MCP connects agents to tools and data sources. It's the "how does an agent use a database" protocol.
A2A connects agents to other agents. It's the "how does a coding agent delegate to a testing agent" protocol.
They're complementary, not competing. Most production systems need both.
Google A2A protocol
Launched by Google in April 2025 and donated to the Linux Foundation, A2A is the open standard for inter-agent communication. It launched with 50+ enterprise partners including Salesforce, SAP, and ServiceNow.
How A2A works
- Discovery: Agents publish an "Agent Card" describing their capabilities
- Task delegation: One agent sends a task to another
- Status updates: The receiving agent streams progress
- Result delivery: The completed result is returned
// Agent Card (published at /.well-known/agent.json)
{
  "name": "Security Reviewer",
  "description": "Reviews code for security vulnerabilities",
  "capabilities": ["code-review", "vulnerability-scan", "dependency-audit"],
  "endpoint": "https://security-agent.example.com/a2a",
  "authentication": {
    "type": "oauth2",
    "token_url": "https://auth.example.com/token"
  }
}
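Before delegating, a client can fetch this card and check that the target agent actually advertises the capability it needs. A minimal sketch (the field names match the example card above; real A2A Agent Cards carry more metadata, such as skills and supported modalities):

```python
# Minimal Agent Card check before delegating (illustrative)
REQUIRED_FIELDS = {"name", "capabilities", "endpoint"}

def supports(card: dict, capability: str) -> bool:
    """Return True if the card is well-formed and lists the capability."""
    missing = REQUIRED_FIELDS - card.keys()
    if missing:
        raise ValueError(f"Agent Card missing fields: {sorted(missing)}")
    return capability in card["capabilities"]

card = {
    "name": "Security Reviewer",
    "capabilities": ["code-review", "vulnerability-scan"],
    "endpoint": "https://security-agent.example.com/a2a",
}
print(supports(card, "code-review"))  # True
```

Validating the card up front turns a bad delegation into a clear error instead of a confusing downstream failure.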
# Sending a task to another agent via A2A
import asyncio
import httpx

async def delegate_to_agent(agent_url: str, task: dict):
    async with httpx.AsyncClient() as client:
        # Send task
        response = await client.post(f"{agent_url}/tasks", json={
            "task": task["description"],
            "context": task["context"],
            "callback_url": "https://my-agent.example.com/results",
        })
        task_id = response.json()["task_id"]
        # Poll for results (or use webhook callback)
        while True:
            status = await client.get(f"{agent_url}/tasks/{task_id}")
            if status.json()["state"] in ("completed", "failed"):
                return status.json()
            await asyncio.sleep(5)
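The delegation loop assumes the receiving agent exposes a small task API. A minimal in-memory sketch of that receiving side (the endpoint semantics and states mirror the client code; a real A2A server would add persistence, authentication, and streaming status updates):

```python
# In-memory task store for the receiving agent (illustrative)
import uuid

tasks: dict[str, dict] = {}

def create_task(description: str, context: dict) -> str:
    """Register an incoming task; a handler for POST /tasks would call this."""
    task_id = str(uuid.uuid4())
    tasks[task_id] = {"task_id": task_id, "state": "working",
                      "description": description, "context": context}
    return task_id

def complete_task(task_id: str, result: str) -> None:
    """Store the result; the client's poll on GET /tasks/{task_id} then sees it."""
    tasks[task_id].update(state="completed", result=result)

def get_status(task_id: str) -> dict:
    """Return the current task record, including its state."""
    return tasks[task_id]
```

Wrapping this store in any HTTP framework gives you the two endpoints the polling client expects.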
When to use A2A
- Agents from different vendors need to collaborate
- You're building a marketplace of specialized agents
- Enterprise environments with agents from multiple teams
- Cross-organization agent delegation
MCP for agent-to-tool communication
MCP is Anthropic's protocol for connecting agents to tools and data sources. While it's primarily agent-to-tool, it can also enable indirect agent-to-agent communication through shared resources.
# MCP server exposing tools to any agent
import subprocess

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("code-tools")

@mcp.tool()
async def search_codebase(query: str, path: str = ".") -> str:
    """Search the codebase for a pattern."""
    result = subprocess.run(["rg", query, path], capture_output=True, text=True)
    return result.stdout

@mcp.tool()
async def run_tests(test_path: str) -> str:
    """Run tests and return results."""
    result = subprocess.run(["pytest", test_path, "-v"], capture_output=True, text=True)
    return result.stdout
Any agent that speaks MCP can use these tools: Claude Code, Gemini CLI, or custom agents built with the OpenAI Agents SDK.
For MCP security considerations, see our MCP security checklist.
Pattern: Shared workspace
The simplest form of agent-to-agent communication: a shared filesystem or database.
Agent A (Code Writer)          Agent B (Code Reviewer)
          │                              │
          ▼                              ▼
┌─────────────────────────────────────────────┐
│           Shared Git Repository             │
│           /workspace/src/                   │
│           /workspace/reviews/               │
│           /workspace/.agent-messages/       │
└─────────────────────────────────────────────┘
# Agent A writes code and signals Agent B
import asyncio
import json
import os
from datetime import datetime, timezone
from glob import glob

async def write_and_signal(code, filepath):
    # Write the code
    with open(f"/workspace/src/{filepath}", "w") as f:
        f.write(code)
    # Signal the reviewer
    with open("/workspace/.agent-messages/review-request.json", "w") as f:
        json.dump({
            "file": filepath,
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "author": "code-writer-agent",
            "request": "Please review for security issues",
        }, f)

# Agent B watches for review requests
async def watch_for_reviews():
    while True:
        requests = glob("/workspace/.agent-messages/review-request*.json")
        for req_file in requests:
            with open(req_file) as f:
                request = json.load(f)
            with open(f"/workspace/src/{request['file']}") as f:
                code = f.read()
            review = await review_code(code)
            # Write review result
            with open(f"/workspace/reviews/{request['file']}.review.md", "w") as f:
                f.write(review)
            os.remove(req_file)  # Acknowledge
        await asyncio.sleep(10)
This is how the agents in our AI Startup Race communicate: through a shared filesystem on the VPS. It's simple, debuggable, and doesn't require any protocol implementation.
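One caveat with the polling pattern: the watcher can read a request file while the writer is still mid-write. Writing to a temporary file and renaming it into place avoids that, because the rename is atomic on POSIX filesystems. A sketch (the directory layout follows the shared-workspace example; `drop_message` is an illustrative helper, not part of any protocol):

```python
# Atomically drop a message file so a watcher never sees a partial write
import json
import os
import tempfile

def drop_message(directory: str, name: str, payload: dict) -> str:
    """Write payload to a temp file, then rename into place (atomic on POSIX)."""
    fd, tmp_path = tempfile.mkstemp(dir=directory, suffix=".tmp")
    with os.fdopen(fd, "w") as f:
        json.dump(payload, f)
    final_path = os.path.join(directory, name)
    os.rename(tmp_path, final_path)  # watcher only ever sees the complete file
    return final_path
```

The temp file must live in the same directory (hence `dir=directory`): `os.rename` is only atomic within a single filesystem.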
Protocol comparison
| Feature | A2A | MCP | Shared workspace |
|---|---|---|---|
| Agent discovery | ✅ Agent Cards | ❌ Manual config | ❌ Hardcoded |
| Cross-vendor | ✅ Open standard | ✅ Open standard | ❌ Custom |
| Streaming | ✅ SSE/WebSocket | ✅ Streaming | ❌ Polling |
| Authentication | ✅ OAuth2 | ✅ Token-based | ❌ Filesystem perms |
| Complexity | High | Medium | Low |
| Maturity | Growing (Linux Foundation) | Mature (wide adoption) | Battle-tested |
| Best for | Agent-to-agent | Agent-to-tool | Same-system agents |
Practical architecture: combining protocols
A real production system uses all three:
┌──────────────┐     A2A      ┌──────────────┐
│ Coding Agent │─────────────►│ Review Agent │
│ (Claude Code)│              │   (Custom)   │
└──────┬───────┘              └──────┬───────┘
       │ MCP                         │ MCP
       ▼                             ▼
┌──────────────┐              ┌──────────────┐
│  Git Server  │              │  Slack API   │
│ (MCP Server) │              │ (MCP Server) │
└──────────────┘              └──────────────┘
       │                             │
       └──────────────┬──────────────┘
                      ▼
              ┌──────────────┐
              │ Shared Repo  │
              │ (Workspace)  │
              └──────────────┘
- A2A for the coding agent to delegate reviews to the review agent
- MCP for both agents to access Git and Slack
- Shared workspace for the actual code files both agents work on
Getting started
If you're building multi-agent systems today:
- Start with shared workspaces: simplest, works immediately
- Add MCP for tool access: set up MCP servers for your databases, APIs, and file systems
- Add A2A when you need cross-vendor agent collaboration: most teams don't need this yet
Don't over-engineer. Most multi-agent systems work fine with shared workspaces and MCP. A2A becomes necessary when you're integrating agents from different teams or vendors.
Related: What is MCP? · MCP Security Checklist · How to Build Multi-Agent Systems · Agent Orchestration Patterns · Gemini CLI Subagents · OpenAI Agents SDK Guide · MCP vs Function Calling