Multi-agent systems go beyond supervisor-worker: agents may be peers, may negotiate, may critique each other. Useful when a task benefits from multiple perspectives (e.g. a planner + critic pair, or specialists in domains A, B, C combining findings).
The production tax is real: each agent is a round of model calls, and coordinating them robustly requires a message protocol (LangGraph, CrewAI, bespoke state machine). Failure modes compound -- one agent's hallucination becomes the next agent's input. Worth it only when a single agent with a well-written prompt chain fails.
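The message protocol mentioned above can be made concrete with a small sketch. This is not LangGraph or CrewAI code -- the `Message` and `relay` names are hypothetical -- but it shows the core idea: every inter-agent hop is a structured record with a provenance trail, so when one agent's hallucination contaminates downstream output, you can trace which agent introduced it.

```python
# Minimal sketch of an inter-agent message envelope (hypothetical names,
# not any framework's real API). Each hop records provenance so a bad
# output can be traced back to the agent that produced it.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Message:
    sender: str
    content: str
    hops: list = field(default_factory=list)  # audit trail of prior senders

def relay(msg: Message, next_agent: str,
          transform: Callable[[str], str]) -> Message:
    """Hand content to the next agent, extending the audit trail."""
    return Message(
        sender=next_agent,
        content=transform(msg.content),  # stand-in for a model call
        hops=msg.hops + [msg.sender],
    )
```

In a real system `transform` would be a model call with retries; the point of the envelope is that retries, logging, and blame assignment all hang off the same record.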
Example Prompt
Set up a 2-agent review system:
AGENT A (Writer):
- Input: a draft policy document
- Output: a revised version aiming to improve clarity
AGENT B (Auditor):
- Input: the revised document + the original
- Output: a numbered list of any regressions (places the revision lost accuracy or introduced ambiguity)
Loop for up to 3 rounds. Exit early if the auditor returns zero regressions.
When to use it
- Task benefits from adversarial / complementary perspectives (writer + auditor)
- Specialist knowledge partitions cleanly across agents
- You have tooling to manage inter-agent state and retries
When NOT to use it
- A single agent with a good prompt works -- don't add agents for their own sake
- Latency and cost budget won't tolerate 3-5x the API calls
- The coordination overhead exceeds the quality gain
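The writer/auditor loop from the example prompt can be sketched in a few lines. The model calls are stand-ins (`toy_writer`, `toy_auditor` are invented for illustration); in practice each would be an LLM call, but the control flow -- bounded rounds with an early exit on a clean audit -- is the same.

```python
# Sketch of the 2-agent review loop: writer revises, auditor checks,
# up to 3 rounds, exiting early when the auditor finds no regressions.
from typing import Callable

MAX_ROUNDS = 3

def review_loop(
    original: str,
    writer: Callable[[str], str],              # draft -> revised draft
    auditor: Callable[[str, str], list],       # (revised, original) -> regressions
) -> tuple[str, list]:
    draft = original
    regressions: list = []
    for _ in range(MAX_ROUNDS):
        draft = writer(draft)
        regressions = auditor(draft, original)
        if not regressions:  # auditor returned zero regressions: exit early
            break
    return draft, regressions

# Stand-in agents for demonstration (hypothetical, not model-backed):
def toy_writer(draft: str) -> str:
    return draft.replace("utilize", "use")     # pretend clarity edit

def toy_auditor(revised: str, original: str) -> list:
    # Flag a "regression" if the revision lost over 10% of the text.
    if len(revised) < 0.9 * len(original):
        return ["revision dropped substantial content"]
    return []
```

Note the loop returns whatever the last round produced even if regressions remain -- a real system would need a policy for that case (escalate, revert, or ship with warnings).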
