
Jonathan Huang


Google Agent Development Kit (ADK) Introduction (4): Google ADK and A2A vs MCP and Traditional APIs

At this stage in my learning, I've paused to reconsider the development differences between the A2A model and the MCP/traditional API models, based on my experience with the projects built in the earlier articles of this series.

Core Architecture and Development of Google ADK and Agent-to-Agent (A2A)

In 2025, Google launched the Agent Development Kit (ADK), an open-source Python toolkit aimed at simplifying AI agent development. ADK emphasizes modularity and flexibility, allowing developers to build agents with memory, tool access, and coordination features. It integrates well with Google services (e.g., Vertex AI, Gemini) but also supports external models and tools.
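
To make this concrete, here is a minimal sketch of what defining a single ADK agent might look like. The import path, model name, and the get_weather tool are illustrative assumptions based on ADK's documented Python API and may differ across versions.

```python
# Minimal single-agent sketch (illustrative; exact ADK APIs may differ by version).
from google.adk.agents import Agent


def get_weather(city: str) -> dict:
    """Hypothetical tool: return a canned weather report for a city."""
    return {"city": city, "forecast": "sunny", "temperature_c": 24}


# An ADK agent bundles a model, an instruction, and the tools it may call.
weather_agent = Agent(
    name="weather_agent",
    model="gemini-2.0-flash",  # any Gemini or external model ADK supports
    description="Answers questions about the weather in a given city.",
    instruction="Use the get_weather tool to answer weather questions.",
    tools=[get_weather],
)
```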

The Agent-to-Agent (A2A) protocol, also by Google, standardizes communication between agents. Each agent exposes a /run endpoint and metadata, enabling other agents or systems to send requests and receive responses. A2A solves the interoperability issue among agents from different platforms—acting like a "diplomatic hotline" for direct collaboration.
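
As a rough illustration of that "diplomatic hotline" idea, here is a sketch of an agent exposing metadata plus a task endpoint over HTTP, following the article's description rather than the full A2A 0.1.0 specification. The FastAPI server, field names, and skills listed are assumptions for illustration only.

```python
# Simplified sketch of an A2A-style agent endpoint; field names and paths are
# illustrative assumptions, not the exact A2A 0.1.0 schema.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()


class TaskRequest(BaseModel):
    task: str  # natural-language task sent by another agent


@app.get("/.well-known/agent.json")
def agent_card() -> dict:
    """Metadata that lets other agents discover what this agent can do."""
    return {
        "name": "travel_agent",
        "description": "Books flights and hotels.",
        "skills": ["book_flight", "book_hotel"],
    }


@app.post("/run")
def run(request: TaskRequest) -> dict:
    """Receive a task from a peer agent and return a result."""
    # A real agent would hand request.task to its LLM and tools here.
    return {"status": "completed", "output": f"Handled task: {request.task}"}
```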

In ADK, every agent supports A2A by default, making integration seamless. Developers can compose agents into sequential, parallel, or hierarchical workflows. For example, SequentialAgent handles ordered execution of sub-agents. Simple use cases may use a single multi-tool agent (e.g., multi_tool_agent), while complex flows (e.g., meeting_workflow, multi_agent_pm) coordinate multiple agents to achieve their goals efficiently.
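
The sketch below shows how such a composition might look with SequentialAgent. The two sub-agent roles (a summarizer and an action-item extractor) are invented for illustration, and the exact constructor arguments may vary between ADK versions.

```python
# Sketch of composing sub-agents into an ordered workflow with SequentialAgent.
# Agent/SequentialAgent come from ADK; the sub-agent roles are illustrative.
from google.adk.agents import Agent, SequentialAgent

summarizer = Agent(
    name="summarizer",
    model="gemini-2.0-flash",
    instruction="Summarize the meeting transcript provided by the user.",
)

action_item_extractor = Agent(
    name="action_item_extractor",
    model="gemini-2.0-flash",
    instruction="Extract action items from the summary produced earlier.",
)

# SequentialAgent runs its sub-agents in order, sharing session state between them.
meeting_workflow = SequentialAgent(
    name="meeting_workflow",
    sub_agents=[summarizer, action_item_extractor],
)
```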

Typical Processes of MCP Architecture and Traditional API Calls

The Model Context Protocol (MCP) introduced by Anthropic in 2024 is an open standard often described as the "USB-C interface" of the AI industry. It standardizes how large language models (LLMs) connect with external tools/data sources. MCP specifies how agents (or models) invoke external resources such as database queries, web services, or software APIs through standardized interfaces. In other words, MCP standardizes previously fragmented API integration methods, making interactions between AI agents and software services more consistent. Using MCP, an AI assistant-type agent can conveniently call various tools that implement MCP interfaces—such as querying databases, reading files, or performing web searches—without redesigning communication methods for each new tool.
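
To show what "standardized tool interfaces" means in practice, here is a sketch of a tiny MCP tool server using the FastMCP helper from the MCP Python SDK; the import path follows that SDK but may differ across versions, and the search_documents tool is a made-up example. Any MCP-aware client can then discover and call this tool without bespoke integration code.

```python
# Sketch of a minimal MCP tool server; treat package names and APIs as illustrative.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("knowledge-tools")


@mcp.tool()
def search_documents(query: str) -> list[str]:
    """Hypothetical tool: return document titles matching a search query."""
    # A real server would query a database or search index here.
    return [f"Result for '{query}' #1", f"Result for '{query}' #2"]


if __name__ == "__main__":
    # An MCP-aware client (e.g. an LLM assistant) can now list and call this tool
    # through the standardized protocol instead of custom glue code per tool.
    mcp.run()
```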

Typically, MCP-based architectures adopt a "monolithic command mode" to organize application logic. "Monolithic" means the primary logic is controlled by a single central agent (or application), while "command mode" means that agent sequentially issues tool-invocation commands as needed to complete subtasks. For example, without multi-agent collaboration, a chatbot arranging travel might internally execute: "input validation → flight API call → hotel API call → integrate results → respond to user". The entire process is orchestrated by one agent or program that directly invokes external APIs and handles their responses. Developers must manually code the logic for each API call, including parameter preparation, error handling, and result parsing. The defining characteristic of the traditional API-call model is that developers fully control the workflow's details, explicitly executing each step via function calls or HTTP requests.
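
A sketch of that travel flow, written the traditional way: one function validates input, calls each API in turn with its own error handling, and merges the results. The endpoints and response shapes are hypothetical.

```python
# Sketch of the "monolithic command mode" travel flow described above.
# The flight/hotel endpoints are hypothetical placeholders.
import requests


def plan_trip(destination: str, dates: tuple[str, str]) -> dict:
    # 1. Input validation, hand-written by the developer.
    if not destination or len(dates) != 2:
        raise ValueError("A destination and a (start, end) date pair are required.")

    # 2. Flight API call, with manual error handling and result parsing.
    try:
        flights = requests.get(
            "https://api.example.com/flights",  # hypothetical endpoint
            params={"to": destination, "depart": dates[0], "return": dates[1]},
            timeout=10,
        ).json()
    except requests.RequestException:
        flights = []

    # 3. Hotel API call, with its own ad-hoc error handling.
    try:
        hotels = requests.get(
            "https://api.example.com/hotels",  # hypothetical endpoint
            params={"city": destination, "checkin": dates[0], "checkout": dates[1]},
            timeout=10,
        ).json()
    except requests.RequestException:
        hotels = []

    # 4. Integrate results and respond; every step is explicitly sequenced in code.
    return {"destination": destination, "flights": flights, "hotels": hotels}
```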

When employing traditional methods to let AI models use tools, two typical approaches emerge. One resembles OpenAI Function Calling: the model references predefined functions in its responses, and external code intercepts and executes those calls. The other has developers explicitly coding the sequential logic inside the application, feeding the model's outputs into the next step at the appropriate time. Both methods essentially wire models and functional modules together directly in code. Because there is no standardization, integration methods vary significantly between tools, and developers' mental models remain largely function-oriented: the AI is viewed primarily as an initiator of library or API calls, with the developer acting as the orchestrator who arranges a linear execution sequence. This works well for straightforward tasks, but the code tends to become difficult to maintain and scale as complexity grows. A sketch of the first approach follows below.
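
Here is a rough sketch of the function-calling loop: the model proposes a call, and the surrounding code intercepts it, executes the function, and feeds the result back. The get_weather function and model name are illustrative, the sketch assumes the model does request a tool call, and exact response fields may differ by SDK version.

```python
# Sketch of an OpenAI-style function-calling loop; SDK details may vary by version.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]


def get_weather(city: str) -> str:
    return f"Sunny in {city}"  # stand-in for a real API call


messages = [{"role": "user", "content": "What's the weather in Taipei?"}]
response = client.chat.completions.create(
    model="gpt-4o-mini", messages=messages, tools=tools
)
call = response.choices[0].message.tool_calls[0]  # assumes the model requested a call

# Developer-written glue: execute the requested function and hand the result back.
result = get_weather(**json.loads(call.function.arguments))
messages += [
    response.choices[0].message,
    {"role": "tool", "tool_call_id": call.id, "content": result},
]
final = client.chat.completions.create(
    model="gpt-4o-mini", messages=messages, tools=tools
)
print(final.choices[0].message.content)
```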

Notably, MCP somewhat enhances the traditional model by letting some tool integrations be handled by standardized MCP tool servers. Advanced AI models such as Claude can now invoke external knowledge bases or execute programs via MCP interfaces. However, even with MCP, architectures dominated by a single central agent remain monolithic: one intelligent entity sequentially connects to multiple tools. Although MCP creates more consistent interfaces between agents and tools, the overall development process remains traditional, following a "plan → invoke → control results" pattern, and developers must still manually break complex tasks into steps and code the logic themselves. In short, MCP offers standardized tool integration that improves how agents reach external resources, whereas A2A/ADK introduces an agent-collaboration architecture that fundamentally shifts how we organize program logic. Below, I detail the technical and conceptual differences between these approaches.

Technical Comparison: Modularity, State Management, Workflow Control, etc.

The following table compares ADK/A2A multi-agent mode with MCP/traditional API monolithic mode across critical dimensions from a technical perspective:

| Aspect | ADK/A2A Development Mode | MCP/Traditional API Mode |
| --- | --- | --- |
| Modularity | Highly modular: systems are composed of multiple specialized agents cooperating via standardized interfaces, with clear responsibilities. | Low modularity: typically a single application or agent handles all functions, connected via function calls, with blurred responsibility boundaries and high coupling. |
| State management | Built-in state management: the framework provides session memory and context management (e.g., ADK's Session Service), so agents can easily access shared information. | Manual state management: developers maintain context by hand within programs or rely on the model's prompt memory; the lack of a unified strategy often causes synchronization issues. |
| Workflow control | Flexible workflow control: supports sequential, parallel, and hierarchical execution, using built-in scheduling mechanisms such as SequentialAgent to organize multi-step tasks. | Linear procedural flow: developers hard-code execution sequences and logic manually; without high-level abstractions, workflows become complex and tangled. |
| Error handling | Granular error handling: frameworks like ADK provide middleware for centralized handling of tool errors, retry strategies, and fallback plans, improving reliability. | Manual error handling: developers must handle each API call's errors individually (e.g., try/except) without centralized control; inconsistent handling across modules adds complexity. |
| Scalability | Easy expansion: add agents or tools behind standardized interfaces with minimal impact on the existing system, akin to adding microservices. | Difficult expansion: monolithic architectures require widespread code changes to add features, becoming unwieldy and costly to maintain as complexity grows. |

(Note: MCP and traditional API methods are grouped as one due to similar single-agent structures, despite differences in standardization.)

Summary

The comparison clearly shows that ADK/A2A's multi-agent architecture excels in cohesion and loose coupling. Developers can construct AI agents much like microservices, each communicating via standardized protocols and focused on its own responsibility without interfering with the others. Conversely, the traditional monolithic model entangles functional components tightly within a single process, limiting flexibility and scalability, and modifications typically ripple through the entire workflow. This article is already getting rather long, so I will stop here. In the next article, I will look at how this shift affects engineers' day-to-day development, from the perspective of the developer's mental model and the division of tasks.

This article references Simon Liu's introduction to ADK/MCP/A2A and Koyeb's technical blog analysis of the A2A and MCP protocols. The content may contain misunderstandings, but I have done my best to understand the material and write this article accurately.

It is worth noting that since these technologies are relatively new (all launched between the end of 2024 and the beginning of 2025), some details may still be under development, especially since the version of the A2A protocol is only 0.1.0, indicating that it may still be in its early stages. As these technologies mature and see wider adoption, more features and best practices may emerge in the future.
