Google has introduced a new protocol called A2A, short for Agent2Agent. It enables communication between two agentic applications or between two agents. The crazy part is that this protocol can connect AI agents from any framework, whether it's LangChain, CrewAI, Google's ADK, or even custom-built systems. A2A itself is a shared protocol built on top of HTTP.
This protocol is honestly wild, and the kind of applications and implementations we’re going to see from it are going to be mind-blowing. Just like MCP, it’s expected to gain a lot of traction. As more applications are developed with it, the momentum will only grow. The best part is that it doesn’t replace MCP; instead, it works alongside it, and both can be used together to build powerful systems.
Let’s jump into this article and see how it all works.
How A2A Works: An Official Demonstration
An official demo from Google shows how the protocol works. It starts with the end user, which is you. The client in the demo is Google Agentspace, but it could be any client. Everything begins with a single client agent: based on the task you give it, this agent looks for other remote agents to hand the task over to.
This is where the A2A protocol steps in. It enables smooth communication between two AI agents.
What is an AI agent? At its core, it's just a large language model paired with a set of tools. Those tools define what the agent can do.
In this new protocol, every agent has an agent card that describes its abilities. The client agent reads the agent cards of the remote agents and picks the one best suited for the task. That agent can then pass the task on to another agent, creating a chain and forming a multi-agent workflow. This is how A2A makes the process much easier: once the protocol is in place, agents from any framework can talk to each other without friction.
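To make the discovery step concrete, here is a minimal sketch of how a client could fetch and read a remote agent's card over HTTP. It assumes the card is published at the /.well-known/agent.json path described in the A2A documentation; the agent URL and field names are illustrative, so check the spec for the exact schema.

```python
import requests

def fetch_agent_card(base_url: str) -> dict:
    """Fetch an agent's card from its well-known discovery path (per the A2A docs)."""
    resp = requests.get(f"{base_url}/.well-known/agent.json", timeout=10)
    resp.raise_for_status()
    return resp.json()

# Hypothetical remote agent URL, used only for illustration.
card = fetch_agent_card("https://sourcing-agent.example.com")
print(card.get("name"), "-", card.get("description"))
for skill in card.get("skills", []):
    print("skill:", skill.get("name"))
```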
Example: Hiring a Software Engineer
The example provided shows the agent-to-agent protocol being used to hire a software engineer based on a given job description. You can clearly observe how the A2A protocol functions:
- Initiation: The protocol is initiated, which is visible in the thinking process.
- Discovery: To discover different agents suitable for the task, it examines their agent cards. These are the main source for understanding the capabilities of each agent. There are several ways to explore agent cards; in this case, there’s a tool registry where it finds the sourcing agent and initiates a call to it.
- Constraints: Additional constraints have also been provided to the agent to find the best possible candidate.
- Execution: Once the sourcing agent completes its task, we can see it identifies five candidates for the job.
- Follow-up: Two weeks later, after the interviews are done, the agent is used again to gather updates and perform background checks based on the candidate data. The system is capable of running a background check on a single candidate.
This entire process of hiring a software engineer based on a job description is automated using these agents. The most important thing is that this wasn’t handled by a single agent. The A2A protocol allowed multiple agents to work together, all communicating through a single protocol.
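Under the hood, the discovery step in a workflow like this comes down to comparing the task against the skills each agent advertises on its card. Here is a rough, hypothetical sketch of that selection logic; the registry URLs and skill IDs are made up for illustration, and a real setup would rely on the registry and matching approach described in the A2A docs.

```python
import requests

# Hypothetical registry of remote agents; in the demo this role is played by a tool registry.
AGENT_URLS = [
    "https://sourcing-agent.example.com",
    "https://background-check-agent.example.com",
]

def load_cards(urls):
    """Fetch the agent card of every registered agent."""
    return [requests.get(f"{u}/.well-known/agent.json", timeout=10).json() for u in urls]

def pick_agent(cards, required_skill):
    """Return the first agent whose card advertises the skill we need."""
    for card in cards:
        skill_ids = {s.get("id") or s.get("name") for s in card.get("skills", [])}
        if required_skill in skill_ids:
            return card
    return None

cards = load_cards(AGENT_URLS)
sourcing_agent = pick_agent(cards, "source-candidates")  # skill id is illustrative
if sourcing_agent is not None:
    print("Delegating the job description to:", sourcing_agent["name"])
```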
A2A and MCP: Working Together
Let’s clear up another concept about the agent-to-agent protocol and MCP. A2A is meant to work with MCP, and even Google has confirmed this.
To explain why, think of it like this: MCP is what gives an LLM its tools or access to specific data. Picture a repairman. He has a screwdriver and the knowledge to fix cars; that's the MCP part. But this repairman also needs to talk to others. Maybe he needs to speak with clients or borrow parts from someone else. That's where the agent-to-agent protocol comes in. It allows agents to communicate with each other. These agents could even be built around separate MCP servers while acting as independent agents, sharing tools or requesting help when needed.
The key to all of this is the agent card. It defines what each agent is capable of and helps them interact in a structured way.
A clear connection between A2A and MCP is explained in the official documentation, which outlines that future agentic applications will need both protocols to become more capable. As an example, it uses the same auto repair shop analogy: multiple handyman sub-agents work in the shop, and they need tools, but to use those tools properly they also need extra context from customers. Those customers could be other agents.
The interesting part is how MCP fits into this setup. We know that for an agent to be identified over the A2A protocol, it must have an agent card. These agent cards can be listed as resources, and an MCP server can provide access to them. The LLM fetches the agent cards and passes them to its sub-agents; the sub-agents read the cards, and based on that information, the right external agent is chosen. It's a clever integration and shows how both systems can work together in a flexible way.
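As a rough illustration of that idea, here is a minimal sketch of an MCP server exposing agent cards as resources. It assumes the official MCP Python SDK's FastMCP helper; the resource URI scheme and the card contents are hypothetical.

```python
import json
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("agent-card-registry")

# Hypothetical agent cards; a real registry would load these from the agents themselves.
CARDS = {
    "sourcing": {"name": "Sourcing Agent", "skills": [{"id": "source-candidates"}]},
    "background-check": {"name": "Background Check Agent", "skills": [{"id": "background-check"}]},
}

@mcp.resource("agent-cards://{agent_id}")
def get_agent_card(agent_id: str) -> str:
    """Serve a single agent card as an MCP resource so an LLM client can read it."""
    return json.dumps(CARDS[agent_id])

if __name__ == "__main__":
    mcp.run()
```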
The Anatomy of an Agent Card
The agent card structure is clearly defined in the documentation. It includes key information about the AI agent:
- version, name, description: Basic identification and purpose.
- intended use: The specific goal of the agent.
- skills: The core capabilities the agent can perform.
- default content type: The data format the agent supports.
- parameters: The kind of input the agent needs.
- authentication: Details of the required authentication scheme, if the agent needs one.
When an LLM or another agent tries to access this agent, it first reads the agent card. Based on that, it decides whether to use the agent and how to interact with it. This makes the accuracy of the agent card critical to how the entire agent-to-agent system functions.
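To make that concrete, here is a hypothetical agent card sketched as a Python dictionary. The field names loosely follow the list above; the real schema is defined in the A2A documentation, so treat this as illustrative only.

```python
# Hypothetical agent card for a candidate-sourcing agent (field names are illustrative).
sourcing_agent_card = {
    "name": "Sourcing Agent",
    "description": "Finds job candidates that match a given job description.",
    "version": "1.0.0",
    "url": "https://sourcing-agent.example.com",   # where the agent is served
    "defaultInputModes": ["text"],                  # content types it accepts
    "defaultOutputModes": ["text"],                 # content types it returns
    "authentication": {"schemes": ["bearer"]},      # how callers must authenticate
    "skills": [
        {
            "id": "source-candidates",
            "name": "Source candidates",
            "description": "Return a shortlist of candidates for a job description.",
        }
    ],
}
```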
Sample Usage and Methods
Further in the documentation, sample agent cards and methods for sending tasks and receiving responses are provided.
Example: Google Maps Agent
One example is a Google Maps agent. The card includes a clear description of the tasks it can perform, along with the URL and provider details. It also specifies the type of authentication the agent needs.
Below that, there’s a format showing how a client can send a task to a remote agent to start a new job. In one simple example, the task is to tell a joke.
```json
{
  "task": "Tell me a joke."
}
```
The response comes back as a text output from a model, which delivers the joke.
```json
{
  "output": "Why don't scientists trust atoms? Because they make up everything!"
}
```
This is one way to send a task and get a result. Other methods are also documented. To get started, there’s no need to memorize the syntax. You can feed the documentation into a tool like Cursor with its @docs feature, which will pick up the context and generate code accordingly.
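If you want to try this from code rather than the docs, here is a rough sketch of what sending a task over HTTP could look like. It assumes a JSON-RPC style tasks/send request, which is how the A2A samples frame it; the endpoint URL and exact payload shape are illustrative, so defer to the official documentation.

```python
import uuid
import requests

AGENT_URL = "https://joke-agent.example.com"  # hypothetical A2A agent endpoint

# A tasks/send request wrapped in a JSON-RPC envelope; the field names follow the A2A
# samples, but double-check them against the current spec.
payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tasks/send",
    "params": {
        "id": str(uuid.uuid4()),  # task id chosen by the client
        "message": {
            "role": "user",
            "parts": [{"type": "text", "text": "Tell me a joke."}],
        },
    },
}

resp = requests.post(AGENT_URL, json=payload, timeout=30)
resp.raise_for_status()
print(resp.json())  # the agent's reply, including the joke, comes back in the result
```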
Getting Started with a Sample Agent
In the GitHub repo, they’ve included some sample agents that show how A2A agents can be implemented. One example uses CrewAI: a simple image generation agent built on the Google Gemini API.
It’s a basic agent that just runs on the A2A protocol. To get started, you need to clone the full GitHub repo because the commands depend on that structure. Once it’s cloned, you can run the setup using a few simple commands. Just copy and paste them into your terminal; they’re very straightforward.
Once you run the command, it opens a command-line interface for the A2A agent. The server passes the task to the crew agent, which uses the Gemini API to generate an image. That image is then returned to the server and finally back to the client.
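To get a feel for the server side of that flow, here is a heavily simplified, hypothetical sketch of an HTTP endpoint that accepts a tasks/send request and hands the prompt to an image generation function. This is not the sample repo's actual code; it assumes FastAPI, and generate_image is a stand-in for whatever the CrewAI agent does with the Gemini API.

```python
from fastapi import FastAPI, Request

app = FastAPI()

def generate_image(prompt: str) -> str:
    """Placeholder for the CrewAI + Gemini step; returns a reference to the generated image."""
    return f"/tmp/{abs(hash(prompt))}.png"

@app.post("/")
async def handle_rpc(request: Request):
    """Accept a JSON-RPC tasks/send call, run the agent, and return the result to the client."""
    body = await request.json()
    if body.get("method") != "tasks/send":
        return {"jsonrpc": "2.0", "id": body.get("id"),
                "error": {"code": -32601, "message": "Method not found"}}
    parts = body["params"]["message"]["parts"]
    prompt = " ".join(p["text"] for p in parts if p.get("type") == "text")
    image_ref = generate_image(prompt)
    return {
        "jsonrpc": "2.0",
        "id": body.get("id"),
        "result": {
            "id": body["params"]["id"],
            "status": {"state": "completed"},
            "artifacts": [{"parts": [{"type": "text", "text": image_ref}]}],
        },
    }
```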
This is a simple implementation, and it’s not widely used just yet, but it will be. We saw the same thing happen with MCP; adoption took some time, then it exploded. The same will happen here. Once people start building on top of it, AI agents will become incredibly powerful. They’ll automate a huge amount of work and change how we use AI every day.