Multi-agent systems often fail because agents don't speak the same language. This article explores Google's A2A (Agent-to-Agent) Protocol as the "universal translator" solution. We build "StoryLab," a practical system with three agents (Orchestrator, Creator, Critic) using Python and Ollama, demonstrating how standardizing discovery (Agent Cards) and communication (Message Envelopes) solves the interoperability crisis.

Building Multi-Agent Systems That Communicate Reliably with the A2A Protocol

2025/12/08 17:45


The AI landscape is shifting beneath our feet. We've moved past the "God Model" era, where one massive LLM tries to do everything, into the age of Multi-Agent Systems. We have specialized agents for coding, reviewing, designing, and testing. It's a beautiful vision of digital collaboration.

But there's a problem.

They don't speak the same language. Your Coding Agent speaks JSON-RPC, your Review Agent expects gRPC, and your Design Agent just wants a REST API. It's the Tower of Babel all over again. Instead of a symphony, we get a cacophony of 400 Bad Request errors.

This is where Google's A2A (Agent-to-Agent) Protocol comes in: a universal translator for the AI age.

In this deep dive, we're not just reading documentation. We're going to build A2A StoryLab, a collaborative storytelling system where three distinct AI agents work together to create, critique, and refine stories. It's practical, it's standardized, and it's how you future-proof your AI architecture.


The Architecture: Three Agents, One Mission

To demonstrate the power of A2A, we need a team. A single agent is just a script; a team is a system.

Our StoryLab consists of three specialized roles:

  1. The Orchestrator ("The Director"): The boss. It manages the workflow, sets the goal ("Adapt 'The Tortoise and the Hare' for Gen Z"), and enforces quality gates.
  2. The Creator ("The Artist"): The generative talent. It takes a prompt and spins a yarn. It's creative but needs direction.
  3. The Critic ("The Editor"): The quality assurance. It reads the story, scores it on creativity and coherence, and provides specific feedback for the Creator.

The Workflow of a Request

It starts with a simple user request: "Adapt 'Bear Loses Roar' as a scientist who lost formulas."

The Orchestrator spins up a session and pings the Creator. The Creator drafts a version. The Orchestrator passes that draft to the Critic. The Critic hates it (score: 4/10) and explains why. The Orchestrator passes that feedback back to the Creator.

They iterate. Once the score hits 8/10, the Orchestrator ships the final story.


1. Discovery: The "Business Card" of Agents

In a messy microservices world, finding the right service is half the battle. A2A solves this with Agent Cards. Think of them as a standardized business card that lives at /.well-known/agent.json.

When the Orchestrator needs a writer, it doesn't need to know the internal API schema of the Creator. It just checks the card.

```python
# src/creator_agent/main.py
@app.get("/.well-known/agent.json")
async def get_agent_card():
    return {
        "name": "Story Creator Agent",
        "description": "Creates and refines story adaptations",
        "url": "http://localhost:8001",
        "protocolVersion": "a2a/1.0",
        "capabilities": ["remix_story", "refine_story"],
        "skills": [
            {
                "id": "remix_story",
                "name": "Remix Story",
                "description": "Create a story variation from base story",
                "inputModes": ["text", "data"],
                "outputModes": ["text"]
            }
        ]
    }
```

This simple endpoint allows for dynamic discovery. You could swap out the Creator agent for a completely different model or service, and as long as it presents this card, the system keeps humming.
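To make that discovery step concrete, here is a minimal sketch of how an orchestrator might validate the cards it has fetched. The helper names (`card_offers_skill`, `pick_agent`) are our own illustration, not part of StoryLab; fetching `/.well-known/agent.json` itself can be done with any HTTP client, so we focus on validating the card once retrieved.

```python
# Hypothetical helpers for skill-based agent selection, assuming the
# card layout shown above. Fetching the card is left to any HTTP client.

def card_offers_skill(card: dict, required_skill: str) -> bool:
    """Return True if the agent card advertises the given skill id."""
    return required_skill in {s.get("id") for s in card.get("skills", [])}

def pick_agent(cards: list, required_skill: str) -> dict:
    """Select the first discovered agent whose card offers the skill."""
    for card in cards:
        if card_offers_skill(card, required_skill):
            return card
    raise LookupError(f"No agent offers skill '{required_skill}'")
```

With this in place, swapping in a different Creator is just a matter of pointing discovery at a new base URL; the selection logic never changes.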


2. The Envelope: A Universal Standard

How do they actually talk? A2A enforces a strict Message Envelope. No more guessing whether the data lives in body, payload, or data.

Here is a real message captured from our StoryLab logs. This is the Orchestrator asking the Creator to get to work:

```json
{
  "protocol": "google.a2a.v1",
  "message_id": "msg_abc123xyz789",
  "conversation_id": "conv_def456uvw012",
  "timestamp": "2025-12-07T10:30:45.123456Z",
  "sender": {
    "agent_id": "orchestrator-001",
    "agent_type": "orchestrator",
    "instance": "http://localhost:8000"
  },
  "recipient": {
    "agent_id": "creator-agent-001",
    "agent_type": "creator"
  },
  "message_type": "request",
  "payload": {
    "action": "remix_story",
    "parameters": {
      "story_id": "bear_loses_roar",
      "variation": "scientist who lost formulas"
    }
  }
}
```

Why This Matters

Notice conversation_id. This ID persists across the entire back-and-forth between the Orchestrator, Creator, and Critic. In a distributed system, this is your lifeline. It allows you to trace a single user request across dozens of agent interactions.
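The tracing behavior can be sketched as a small envelope builder. This is our own illustrative helper, not StoryLab's actual code; it simply reproduces the field layout of the captured message, generating a fresh `conversation_id` for new workflows and reusing an existing one for follow-up hops.

```python
# Hypothetical envelope builder matching the captured message above.
# Field names follow the log; the uuid/timestamp choices are ours.
import uuid
from datetime import datetime, timezone
from typing import Optional

def make_envelope(sender: dict, recipient: dict, action: str,
                  parameters: dict,
                  conversation_id: Optional[str] = None) -> dict:
    """Build an A2A-style request envelope with tracing IDs."""
    return {
        "protocol": "google.a2a.v1",
        "message_id": f"msg_{uuid.uuid4().hex[:12]}",
        # Reusing conversation_id keeps every hop of one workflow traceable
        "conversation_id": conversation_id or f"conv_{uuid.uuid4().hex[:12]}",
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "sender": sender,
        "recipient": recipient,
        "message_type": "request",
        "payload": {"action": action, "parameters": parameters},
    }
```

In practice the Orchestrator would create one envelope per Creator or Critic call, always threading the same `conversation_id` through, so a log search on that one ID reconstructs the whole refinement session.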


3. The Code: Bringing It To Life

Talking about protocols is dry; let's look at the implementation. We use Python and FastAPI to build these agents, with Ollama providing local LLM inference for both story generation and evaluation.

The Orchestrator's Loop

This is the brain of the operation. It implements an iterative refinement loop. It doesn't just fire and forget; it mediates a conversation.

```python
# src/orchestrator/main.py
@app.post("/adapt-story")
async def adapt_story(request: AdaptStoryRequest):
    # ... setup session ...
    for iteration in range(1, MAX_ITERATIONS + 1):
        # Step 1: Ask Creator to generate (or refine)
        if iteration == 1:
            story_result, msg_id = await _call_creator_remix(
                conversation_id, story_id, variation, session_id
            )
        else:
            story_result, msg_id = await _call_creator_refine(
                conversation_id, session_id, current_version,
                current_story_text, feedback=evaluation
            )
        current_story_text = story_result["story_text"]

        # Step 2: Ask Critic to judge
        eval_result, msg_id = await _call_critic_evaluate(
            conversation_id, session_id, current_story_text,
            original_id, iteration
        )

        # Step 3: The Quality Gate
        if eval_result["approved"] and eval_result["score"] >= APPROVAL_THRESHOLD:
            logger.info(f"✓ Story approved at iteration {iteration}")
            break

    return {"story": current_story_text, "score": eval_result["score"]}
```

This pattern, Generate-Evaluate-Iterate, is a fundamental building block of agentic workflows. A2A makes it robust because every step is tracked and standardized.
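Stripped of the HTTP plumbing, the loop reduces to a reusable skeleton. This sketch is ours, not StoryLab's code: the two callables stand in for the Creator and Critic calls, and the default threshold mirrors the 8/10 gate described earlier.

```python
# A stripped-down Generate-Evaluate-Iterate loop with the agent calls
# abstracted as plain callables. Names and defaults are illustrative.
from typing import Callable, Optional

def refine_until_approved(generate: Callable[[Optional[dict]], str],
                          evaluate: Callable[[str], dict],
                          max_iterations: int = 5,
                          threshold: float = 8.0) -> dict:
    """Alternate generation and evaluation until the score clears the gate."""
    feedback = None
    for iteration in range(1, max_iterations + 1):
        draft = generate(feedback)        # Creator step (remix, then refine)
        result = evaluate(draft)          # Critic step
        if result["score"] >= threshold:  # Quality gate
            break
        feedback = result                 # Critique feeds the next refinement
    return {"story": draft, "score": result["score"], "iterations": iteration}
```

Because the agent interactions are injected, the same loop can be unit-tested with stub functions before any real A2A traffic is involved.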


4. The Critic: AI Keeping AI Honest

The Critic agent is interesting because it uses an LLM not to generate, but to analyze. It evaluates the story on four dimensions: Moral Preservation, Structure, Creativity, and Coherence.

```python
# src/critic_agent/main.py
EVALUATION_WEIGHTS = {
    "moral_preservation": 0.30,
    "structure_quality": 0.25,
    "creativity": 0.25,
    "coherence": 0.20
}

async def evaluate_story(message_data: dict):
    # ... unpack A2A message ...

    # LLM-powered evaluation
    eval_result = await ollama_client.evaluate_story(
        story_text=story_text,
        original_story=original_story.text,
        original_moral=original_moral
    )

    # Calculate weighted score
    overall_score = (
        eval_result["moral_preservation"] * EVALUATION_WEIGHTS["moral_preservation"] +
        eval_result["structure_quality"] * EVALUATION_WEIGHTS["structure_quality"] +
        eval_result["creativity"] * EVALUATION_WEIGHTS["creativity"] +
        eval_result["coherence"] * EVALUATION_WEIGHTS["coherence"]
    )
    scaled_score = overall_score * 10.0  # Scale to 0-10
    approved = scaled_score >= APPROVAL_THRESHOLD

    # Return A2A Response
    return create_response_message(..., payload={"score": scaled_score, "approved": approved})
```

By separating the Critic from the Creator, we avoid "hallucination myopia," where a model fails to see its own mistakes. It's pair programming, but for AI.
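As a quick sanity check, here is the weighted score worked through by hand. The weights are the ones from the snippet above; the per-dimension scores are invented for illustration and assume the LLM judge returns values on a 0-1 scale.

```python
# Worked example of the Critic's weighted score. Weights come from
# EVALUATION_WEIGHTS above; the per-dimension scores are hypothetical.
weights = {"moral_preservation": 0.30, "structure_quality": 0.25,
           "creativity": 0.25, "coherence": 0.20}
scores = {"moral_preservation": 0.9, "structure_quality": 0.8,
          "creativity": 0.7, "coherence": 0.85}

overall = sum(scores[k] * weights[k] for k in weights)  # roughly 0.815
scaled = overall * 10.0                                 # roughly 8.15 on a 0-10 scale
approved = scaled >= 8.0                                # clears the quality gate
```

A story that nails the moral but is only moderately creative can still clear the 8/10 gate, which is exactly what the 0.30 weight on moral preservation is designed to encourage.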


Conclusion: The Era of Interoperability

We are moving towards a world where you will buy a "Research Agent" from one vendor, a "Coding Agent" from another, and a "Security Agent" from a third. Without a standard like A2A, integrating them would be a nightmare of custom adapters.

With A2A, they just… talk.

A2A StoryLab is a proof of concept, but the pattern is production-ready:

  1. Standardize Identity (Agent Cards)
  2. Standardize Envelopes (A2A Protocol)
  3. Trace Everything (Conversation IDs)

The future of AI isn't a bigger model. It's a better team.

Resources

  • A2A StoryLab GitHub
  • Google A2A Protocol Spec


