
A2A Protocol for Building Multi-Agent Systems

Sarath V S


At Google Cloud Next 2025 in Las Vegas, Google announced a series of developments that mark a major shift in how AI systems are designed and deployed. Among them is the Agent-to-Agent (A2A) protocol, an open standard that enables AI agents to collaborate securely and autonomously.

Why A2A Protocol?

As AI agents evolve from single-task tools to collaborative entities, robust protocols for coordination and communication between agents have become imperative. Without a shared protocol, these interactions would be brittle and insecure. A2A addresses this by standardizing how agents work together.

Think of a travel planner made up of agents. One agent could specialize in local weather and event data. Another might handle bookings across airlines and hotels. A third could manage itinerary optimization based on user preferences. These agents, built by different vendors and hosted on various platforms, can discover each other and coordinate in real-time using the A2A protocol.

A travel planner system where two AI agents coordinate using the A2A protocol. One handles weather and location search, the other manages flight and hotel bookings.

How Does It Work?

The A2A protocol defines how AI agents discover each other, share their capabilities, and exchange messages regardless of who built them, what models they use, or where they are hosted. It enables a client agent to delegate a task to a remote agent, which performs the work and returns results.

This interaction unfolds in four stages:

Capability Discovery

Each agent publishes an Agent Card, a machine-readable JSON file that describes what it can do. When a task arises, the client agent scans available cards to find a remote agent with the right capabilities.
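As a sketch, a client can parse a remote agent's card (the A2A convention is to serve it at the well-known path /.well-known/agent.json) and match a needed capability against the card's skill tags. The trimmed card and helper function below are illustrative, not part of any SDK:

```python
# Sketch: capability discovery against an Agent Card. Here we parse a
# trimmed example card directly instead of fetching it over HTTP.
import json

card = json.loads("""
{
  "name": "Get Weather Agent",
  "url": "http://localhost:5000",
  "skills": [
    {"name": "Get Weather", "tags": ["weather", "forecast"]}
  ]
}
""")

def supports(card: dict, tag: str) -> bool:
    """Return True if any skill on the card advertises the given tag."""
    return any(tag in skill.get("tags", []) for skill in card.get("skills", []))

print(supports(card, "weather"))   # True
print(supports(card, "booking"))   # False
```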

Task Management

Tasks are formalized as structured objects with a defined lifecycle (created, in progress, completed). The client assigns the task, and both agents exchange updates to track its status. Tasks can be short-lived or long-running.
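The lifecycle above can be sketched as a small state machine. The field and state names here are simplified for illustration; the real A2A task schema is richer:

```python
# Illustrative task object with the lifecycle described above
# (created -> in progress -> completed). Not the exact A2A schema.
import uuid

class Task:
    def __init__(self, description: str):
        self.id = str(uuid.uuid4())
        self.description = description
        self.state = "created"
        self.result = None

    def start(self) -> None:
        self.state = "in progress"

    def complete(self, result: str) -> None:
        self.state = "completed"
        self.result = result

task = Task("Get current weather for Tokyo")
task.start()
task.complete("22°C, wind 15 km/h")
print(task.state)   # completed
```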

Collaboration

Agents communicate through structured messages that carry context, prompts, or results. These may include intermediate artifacts such as text, images, or data, enabling richer, more coordinated workflows.
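A request/reply pair of such messages might look like the sketch below. The dictionary shape is simplified, but the threading fields mirror the parent_message_id and conversation_id used in the weather agent example later in this post:

```python
# Simplified sketch of two structured A2A-style messages: a request and
# the agent's reply, threaded by conversation and parent-message IDs.
# The exact wire format is defined by the A2A JSON schema.
request = {
    "message_id": "msg-1",
    "role": "user",
    "conversation_id": "trip-123",
    "content": {"type": "text", "text": "What's the weather in Paris?"},
}

reply = {
    "message_id": "msg-2",
    "role": "agent",
    "parent_message_id": request["message_id"],      # links reply to request
    "conversation_id": request["conversation_id"],   # keeps the thread intact
    "content": {"type": "text", "text": "18°C with light rain."},
}

print(reply["parent_message_id"])  # msg-1
```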

User Experience Negotiation

To ensure outputs are rendered correctly across systems, agents divide results into parts (text, image, form, etc.) and describe each part’s type and format. This allows receiving agents to decide how best to present the response.
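For example, a reply split into typed parts might look like this (the keys are illustrative, not the exact A2A part schema):

```python
# Illustrative multi-part response: each part declares its type and format
# so the receiving agent can decide how to render it.
response_parts = [
    {"type": "text", "format": "text/plain",
     "content": "Tokyo: 22°C, wind 15 km/h."},
    {"type": "image", "format": "image/png",
     "content": "<binary PNG data>"},
]

# A text-only client renders just the parts it understands.
rendered = [p["content"] for p in response_parts if p["type"] == "text"]
print(rendered)  # ['Tokyo: 22°C, wind 15 km/h.']
```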

Every agent can expose only selected capabilities through its Agent Card, limiting what other agents can invoke. Conversations are signed and tracked, and agents can include identity proofs (such as OAuth tokens or digital signatures) to verify who they are talking to.
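As a minimal sketch, an agent could attach a bearer token as an identity proof on an HTTP request to a remote agent. The token below is a placeholder; a real deployment would obtain one from an OAuth authorization flow:

```python
# Sketch: attaching an identity proof (a hypothetical OAuth bearer token)
# to a request headed for a remote agent. The token is a placeholder.
import urllib.request

token = "example-oauth-token"  # placeholder credential, not a real secret
req = urllib.request.Request(
    "http://localhost:5000",
    headers={"Authorization": f"Bearer {token}"},
)
print(req.get_header("Authorization"))  # Bearer example-oauth-token
```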

Agent Collaboration Example: Weather and Travel Recommendation

To demonstrate how A2A works, let’s build a system with two agents:

  • Weather agent: a lightweight HTTP server built with python_a2a, a messaging abstraction layer. It listens for city names and responds with current temperature and wind data, using static coordinates for four cities (London, New York, Paris, Tokyo) and fetching data from the Open-Meteo API.
  • User agent: sends a city name to the weather agent, receives the weather report, and then uses the ollama Python library to query a local LLM for a travel recommendation.

Dependencies and Setup

Install the required packages:

pip install python-a2a requests ollama

Download and install Ollama if not already installed, then pull the model:

ollama pull deepseek-r1:1.5b

Create two files in your working directory:

  • weather_agent.py
  • user_agent.py

weather_agent.py (server agent)

from python_a2a import (
    A2AServer, Message, TextContent, MessageRole, run_server, skill, agent
)
import requests

# Map city names to coordinates for Open-Meteo
CITY_COORDS = {
    "london": (51.5074, -0.1278),
    "new york": (40.7128, -74.0060),
    "paris": (48.8566, 2.3522),
    "tokyo": (35.6762, 139.6503),
}


def get_weather(city: str) -> str:
    city = city.lower()
    if city not in CITY_COORDS:
        return "Sorry, I only know the weather for London, New York, Paris, or Tokyo."

    lat, lon = CITY_COORDS[city]
    url = (
        f"https://api.open-meteo.com/v1/forecast"
        f"?latitude={lat}&longitude={lon}&current_weather=true"
    )

    try:
        response = requests.get(url, timeout=10)
        response.raise_for_status()
        current = response.json().get("current_weather", {})
        temp = current.get("temperature")
        wind = current.get("windspeed")
        return f"The current temperature in {city.title()} is {temp}°C with wind speed {wind} km/h."
    except Exception as e:
        return f"Failed to fetch weather: {e}"


@agent(
    name="Get Weather Agent",
    description="Get weather information for a specified city",
    version="1.0.0",
    url="https://sampledomain.com",
)
class WeatherAgent(A2AServer):
    @skill(
        name="Get Weather",
        description="Get current weather for a location",
        tags=["weather", "forecast"],
        examples="What's the weather in London?",
    )
    def weather(self, city: str) -> str:
        return get_weather(city)

    def handle_message(self, message: Message):
        city = message.content.text.strip()
        reply = get_weather(city)

        return Message(
            content=TextContent(text=reply),
            role=MessageRole.AGENT,
            parent_message_id=message.message_id,
            conversation_id=message.conversation_id,
        )


if __name__ == "__main__":
    run_server(WeatherAgent(), host="localhost", port=5000)

Here’s the JSON representation of the agent card:

{
  "name": "Get Weather Agent",
  "description": "Get weather information for a specified city",
  "url": "http://localhost:5000",
  "version": "1.0.0",
  "skills": [
    {
      "name": "Get Weather",
      "description": "Get current weather for a location",
      "examples": [
        "What's the weather in London?",
        "Show me the weather forecast for Paris."
      ],
      "tags": ["weather", "forecast"]
    }
  ]
}

user_agent.py (client agent)

from python_a2a import A2AClient, Message, TextContent, MessageRole
import ollama


def generate_response_with_llm(city: str, weather_info: str) -> str:
    prompt = (
        f"A user is considering traveling to {city}. "
        f"The current weather information is: {weather_info}\n\n"
        f"Based on this weather, give a natural, friendly, and helpful recommendation "
        f"on whether it is a good idea to travel there right now. "
        f"If the weather seems dangerous or unpleasant, advise caution or postponement. "
        f"If it's nice or tolerable, encourage the trip. Keep the tone warm and informative."
    )

    response = ollama.generate(
        model="deepseek-r1:1.5b",
        prompt=prompt,
        # num_predict caps output length (Ollama's equivalent of max_tokens)
        options={"temperature": 0.2, "num_predict": 2000},
    )

    return response["response"]


def main():
    client = A2AClient("http://localhost:5000")
    city = input("Enter city (London, New York, Paris, Tokyo): ")

    msg = Message(content=TextContent(text=city), role=MessageRole.USER)
    response = client.send_message(msg)
    raw_weather = response.content.text

    print("\n[Raw weather data from agent]:", raw_weather)

    friendly_output = generate_response_with_llm(city, raw_weather)
    print("\n[LLM-enhanced explanation]:", friendly_output)


if __name__ == "__main__":
    main()

Start the weather agent server, then run the client agent in a separate terminal:

python weather_agent.py
python user_agent.py

The system will fetch raw weather data from weather_agent.py and use the deepseek-r1:1.5b model via Ollama to generate a travel recommendation.

The output produced by this A2A-based workflow:

[Screenshot: travel recommendation for Tokyo generated through the A2A workflow]

It suggests that travel to Tokyo is generally pleasant, though high winds and humidity may make it slightly challenging. The recommendation advises packing light clothing and an umbrella, and notes that with minimal preparation, the trip can be enjoyable.

While this implementation is limited to two agents, the same pattern extends to many specialized agents supporting a wide range of real-world applications.

For those looking to build with A2A, Google provides an official Python SDK.

A2A and MCP: How They Relate

Model Context Protocol (MCP), introduced by Anthropic, standardizes how a model connects to external tools and data sources such as APIs, files, and databases. It allows a single agent to reason more effectively by grounding its decisions in structured data or tool outputs.

A2A, by contrast, standardizes communication between agents, and it is framework-agnostic. Agents built with different runtimes or libraries, such as openai-agents, langgraph, or custom implementations, can discover each other, communicate, and collaborate over a shared protocol. The two protocols are complementary: MCP connects an agent to its tools, while A2A connects agents to each other. This compatibility lowers the barrier for teams to integrate existing tools or frameworks, regardless of their internal architecture.
