
Your Agent Card Is a Prompt: Writing A2A Descriptions That Win Delegation From Other Agents
ADK 1.0 and the A2A protocol turned your agent's metadata into the most important prompt you're not optimizing
The A2A protocol drew 150+ member organizations under the Linux Foundation in its first year. Google ADK 1.0 shipped GA across four languages at Cloud Next 2026. IBM's competing ACP protocol folded into A2A voluntarily. The standard for how agents talk to each other is settled.
And the single most important field in that entire standard is a text string called description.
Other Agents Are Reading Your Resume
When an ADK orchestrator receives a user request, it doesn't run through a routing table. It reads the description field of every registered sub-agent and asks its LLM: "Which of these agents should handle this?"
Your agent's description is a prompt. It's being evaluated by another model, in another system, that you don't control. And that model is deciding whether your agent gets work or gets skipped.
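Here's what that wiring looks like in ADK's Python API. A minimal sketch, with illustrative agent names and descriptions: notice that the coordinator gets no routing logic at all, only sub-agents whose description fields become its routing context.

from google.adk.agents import Agent

billing = Agent(
    model="gemini-2.5-flash",
    name="billing_support",
    description="Handles inquiries about current billing statements",
)

shipping = Agent(
    model="gemini-2.5-flash",
    name="shipping_support",
    description="Tracks packages and handles delivery-status inquiries",
)

# No routing table anywhere: the coordinator's LLM reads each
# sub-agent's description and decides who gets the request.
coordinator = Agent(
    model="gemini-2.5-flash",
    name="coordinator",
    instruction="Delegate each user request to the best-suited sub-agent.",
    sub_agents=[billing, shipping],
)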
Google's own documentation says it plainly: "While the description is optional for standalone agents, it becomes critical in multi-agent systems." Their example of a good description is "Handles inquiries about current billing statements" versus "Billing agent". One tells the routing LLM what the agent actually does. The other tells it nothing.
Most developers are writing the second one.
The Agent Card Spec
In the A2A protocol, every agent publishes an Agent Card at /.well-known/agent-card.json. This is the metadata packet that other agents read before deciding to delegate work. Here's what the key fields look like:
{
  "name": "invoice-reconciliation-agent",
  "description": "Reconciles purchase orders against vendor invoices, flags mismatches over $50, and generates exception reports in CSV format",
  "url": "https://agents.example.com/invoice-recon",
  "skills": [
    {
      "id": "three-way-match",
      "name": "Three-Way Invoice Match",
      "description": "Compares PO, goods receipt, and invoice line items. Returns matched/unmatched/exception categories with confidence scores.",
      "tags": ["accounting", "procurement", "reconciliation"],
      "examples": [
        "Match PO-2024-8891 against invoice INV-44521",
        "Find all unmatched invoices from vendor Acme Corp this quarter"
      ]
    }
  ],
  "defaultInputModes": ["text/plain", "application/json"],
  "defaultOutputModes": ["application/json", "text/csv"]
}
Every one of those fields is a prompt fragment that a routing LLM will consume. The description, the skills[].description, the examples, the tags. They all end up in the orchestrator's context window when it decides who gets delegated what.
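To make that concrete, here's a sketch of how an orchestrator might fold discovered cards into routing context. The flattening format is my own illustration; the A2A spec doesn't prescribe one.

import json
import urllib.request

def fetch_card(base_url: str) -> dict:
    # A2A agents publish their card at this well-known path.
    with urllib.request.urlopen(f"{base_url}/.well-known/agent-card.json") as resp:
        return json.load(resp)

def card_to_prompt_fragment(card: dict) -> str:
    # Flatten the card's prompt-bearing fields into routing context.
    lines = [f"Agent: {card['name']}", f"Description: {card['description']}"]
    for skill in card.get("skills", []):
        lines.append(f"  Skill: {skill['name']} -- {skill['description']}")
        lines.extend(f"    Example: {ex}" for ex in skill.get("examples", []))
    return "\n".join(lines)

agent_urls = ["https://agents.example.com/invoice-recon"]  # from discovery
routing_context = "\n\n".join(
    card_to_prompt_fragment(fetch_card(url)) for url in agent_urls
)
# routing_context is what the orchestrator's LLM actually sees when it
# decides which agent gets delegated the task.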
Bad Descriptions Lose Delegation
Here's what most Agent Cards look like in practice:
{
  "name": "data-agent",
  "description": "Handles data processing tasks",
  "skills": [
    {
      "id": "process",
      "name": "Process Data",
      "description": "Processes data"
    }
  ]
}
Put yourself in the position of a routing LLM. You have a user request: "Pull last quarter's revenue by region and flag any drops over 15%." You have five agents to choose from. One says "Handles data processing tasks." Another says "Queries financial databases, computes period-over-period deltas by segment, and flags anomalies against configurable thresholds." Which one gets the job?
The routing LLM is pattern-matching on semantic relevance. Vague descriptions lose to specific ones every time.
Writing Descriptions That Win
The same principles that make prompts effective make Agent Card descriptions effective. Be specific about inputs, outputs, domain, and constraints.
The Prompt (ADK sub-agent definition):
from google.adk.agents import Agent

billing_agent = Agent(
    model="gemini-2.5-flash",
    name="billing_support",
    description=(
        "Resolves billing questions for SaaS subscription customers. "
        "Can look up invoices by date range or invoice ID, explain "
        "line-item charges, process refund requests under $500, and "
        "generate PDF billing summaries. Does NOT handle plan upgrades "
        "or cancellations."
    ),
)
Why This Works: The description tells the routing LLM four things: who this agent serves (SaaS subscription customers), what it can do (lookup, explain, refund, summarize), the constraints on its authority ($500 refund cap), and what it explicitly can't do (upgrades, cancellations). That last part is just as important. Negative boundaries prevent misrouted requests from wasting tokens and producing bad results.
Expected Output (routing behavior):
When a user says "Why was I charged $89 last month?", the orchestrator routes to this agent immediately. When someone says "I want to upgrade to the enterprise plan," the orchestrator knows to skip it and look for a plan-management agent instead.
Compare that to description="Billing agent". The routing LLM has to guess. And guessing means hallucinating capabilities your agent doesn't have.
The Google Purchasing Concierge Pattern
Google's canonical A2A tutorial shows the pattern in full. A purchasing concierge agent discovers remote seller agents by fetching their Agent Cards. Those cards get injected directly into the orchestrator's system prompt. The orchestrator's own instructions say: "You are an expert purchasing delegator that can delegate the user product inquiry and purchase request to the appropriate seller remote agents."
The routing decision is pure LLM reasoning over natural language descriptions. No routing tables. No if-else chains. The orchestrator reads the Agent Cards like a hiring manager reads resumes, then picks the best fit.
This means every word in your Agent Card's description is competing against every other agent in that context window. You're writing a pitch, not documentation.
The Fields Everyone Leaves Empty
Most Agent Cards fill in name and description and stop there. The fields that actually differentiate you in a multi-agent environment are the ones people skip.
skills[].examples -- These are few-shot examples for the routing LLM. When you include "Match PO-2024-8891 against invoice INV-44521" as an example, you're showing the orchestrator exactly what kind of request maps to this skill. Few-shot examples work in Agent Cards for the same reason they work in any prompt: they reduce ambiguity.
tags -- Registries are coming. Agent-Reg and similar projects are already building searchable indexes with metadata filtering and vector search over descriptions. Tags will function like keywords in search ranking. Pick them deliberately.
defaultInputModes and defaultOutputModes -- If your agent accepts application/json and returns text/csv, say so. Orchestrators that need structured output will prefer agents that explicitly declare it over agents that might or might not handle it.
The Prompt (A2A skill with full metadata):
{
  "id": "threat-report",
  "name": "Threat Intelligence Report",
  "description": "Generates a structured threat intel report from IOCs. Accepts IP addresses, domains, file hashes, or CVE IDs. Returns STIX 2.1 bundles with confidence scoring and MITRE ATT&CK mapping.",
  "tags": ["cybersecurity", "threat-intel", "STIX", "MITRE"],
  "examples": [
    "Generate a threat report for IP 203.0.113.42",
    "Map CVE-2026-1234 to ATT&CK techniques",
    "Correlate these 15 IOCs and identify the threat actor"
  ]
}
Why This Works: The description names specific input formats (IPs, domains, hashes, CVEs), the output standard (STIX 2.1), and two concrete capabilities (confidence scoring, ATT&CK mapping). The examples show three different usage patterns at increasing complexity. A routing LLM reading this knows exactly when to delegate here and when not to.
Signed Cards and the Trust Problem
A2A v1.0 introduced cryptographic signing for Agent Cards. This matters because in decentralized discovery, anyone can publish a card claiming to be anything. A malicious agent could publish a description optimized to attract sensitive financial data, then exfiltrate it.
Signed cards let orchestrators verify the publisher's identity before delegating. If you're building agents for production multi-vendor environments, card signing isn't optional. It's the difference between a verified business listing and a burner account.
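The actual wire format is JWS-based, but a simplified sketch shows where the check belongs in the flow. This stand-in verifies a raw Ed25519 signature over the canonical card bytes; treat the mechanics as illustrative, not spec-accurate.

import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def card_is_trusted(card: dict, signature: bytes, publisher_key: Ed25519PublicKey) -> bool:
    # Canonicalize the card so signer and verifier hash identical bytes.
    canonical = json.dumps(card, sort_keys=True, separators=(",", ":")).encode()
    try:
        publisher_key.verify(signature, canonical)
        return True
    except InvalidSignature:
        return False

# Orchestrator-side gate: a card that fails verification never enters
# the routing prompt, so its description can't win any delegation.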
The Registry Race Is Your SEO Window
Right now, A2A discovery is flat. You either know an agent's .well-known/agent-card.json URL or you don't. But active proposals (GitHub Discussion #741 in the A2A repo) are pushing for standardized registry APIs. Agent-Reg is already building searchable indexing with vector search over descriptions.
When registries mature, your Agent Card description becomes your search ranking. The same way a well-written meta description outperforms a generic one in search results, a specific and detailed Agent Card will outperform a vague one in agent registries.
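Here's a toy sketch of what registry-side ranking could look like, assuming the sentence-transformers library. Real registries will differ in detail, but the lesson holds: the specific description wins the similarity match.

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

cards = {
    "data-agent": "Handles data processing tasks",
    "finance-agent": (
        "Queries financial databases, computes period-over-period deltas "
        "by segment, and flags anomalies against configurable thresholds"
    ),
}

query = "Pull last quarter's revenue by region and flag drops over 15%"
query_vec = model.encode(query)

for name, desc in cards.items():
    score = util.cos_sim(query_vec, model.encode(desc)).item()
    print(f"{name}: {score:.3f}")  # the specific description ranks higher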
The optimization window is right now, while most developers are still writing one-liners.
Why This Matters at Every Scale
ADK 1.0's Event Compaction delivers a 38% reduction in token usage and 18% latency improvement in production benchmarks. Multi-agent orchestration used to be an enterprise-budget-only pattern. It's not anymore.
If you're building agents for clients, each agent's card is its resume in a marketplace where other agents do the hiring. If you're building internal agents, the description field determines whether the orchestrator routes correctly or wastes tokens on mismatched delegation.
Either way, you're writing prompts whether you realize it or not. Write them like they matter.
Start Here
Pick one agent you've built. Open its description field or Agent Card. Ask yourself: if a routing LLM read this alongside four competing agents, would it know when to pick mine and when not to? If the answer is no, rewrite it with specific inputs, outputs, domains, constraints, and at least two example queries.
That's the whole technique. Your agent's metadata is a prompt. Treat it like one.
Want hands-on training on prompt engineering for multi-agent systems? Connect with Kief Studio on Discord or schedule a session.
