Prompting Parallel Coding Agents: Task Decomposition That Actually Ships
How to write prompts for Cursor 3 background agents and Claude Code subagents that run concurrently without stepping on each other
You can spin up eight coding agents right now. Cursor 3 launched with parallel background agents in cloud VMs. Claude Code has Agent Teams with peer-to-peer coordination. Grok Build runs eight agents simultaneously. Windsurf does five.
The hard part was never launching them. The hard part is writing prompts that keep them from colliding on the same files, duplicating each other's work, or producing code that looks correct in isolation and breaks the moment you merge it together.
The Decomposition Problem
A single agent working on a feature touches whatever files it needs. Two agents working on the same feature will both try to modify routes.ts, both add entries to the same config file, and both write overlapping utility functions. Git catches some of these collisions. It misses the semantic ones entirely.
Anthropic's engineering team built a 100,000-line C compiler using 16 Claude agents across roughly 2,000 sessions. Their coordination mechanism was dead simple: each agent writes a text file to claim a task, and git's merge conflicts force the second claimer to pick something else. No orchestration framework. Just file-level ownership and git as the arbiter.
That pattern works because the decomposition was right. Each agent owned a compiler pass or a language feature. The boundaries were architectural, not arbitrary.
Research from Zylos backs this up quantitatively. Separating a high-reasoning planner from cheaper executor agents hit 92% task completion at 3.6x speedup with 90% cost reduction. But those numbers only hold when each executor gets a genuinely independent chunk of work. The same research found that agent success rates drop hard after 35 minutes of autonomous work, and doubling the duration quadruples the failure rate.
Small tasks. Clear boundaries. That's the whole game.
The File Ownership Rule
Every parallel agent prompt needs one thing above all else: an explicit list of files that agent is allowed to touch.
Addy Osmani's research at Google established this clearly. An agent that only sees db.js writes better database code than one that has the full codebase in context. Narrower scope produces higher quality output because the agent isn't spending reasoning tokens navigating irrelevant code.
Here's how this looks in practice. Say you're building a user settings feature with a new API endpoint, a database migration, and a frontend form. Instead of one agent doing all three, you decompose it into three prompts with strict file ownership.
## Agent 1: Database Migration
You are building the database layer for a user-settings feature.
**Files you own (create or modify ONLY these):**
- src/db/migrations/004_user_settings.sql
- src/db/models/user_settings.py
- src/db/queries/settings.py
**Interface contract (do not implement the consumers):**
- Export a `get_settings(user_id: int) -> dict` function
- Export an `update_settings(user_id: int, settings: dict) -> bool` function
- Settings schema: { theme: str, notifications: bool, timezone: str }
**Do not touch:** Any file outside src/db/. Do not modify existing models.
Do not add routes, views, or frontend code.
This prompt works because it gives the agent three things: a file boundary it cannot cross, an interface contract that downstream agents will code against, and an explicit list of what's off-limits.
Expected output: A SQL migration creating a `user_settings` table with the three columns, a SQLAlchemy model, and two query functions matching the interface contract. The agent stays inside `src/db/` and produces code that compiles independently.
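An in-memory sketch of what Agent 1's `src/db/queries/settings.py` might look like. This is illustrative only: the real version would query the `user_settings` table, and the default values here are invented. What matters is that the two functions match the interface contract exactly.

```python
# Hypothetical in-memory stand-in for src/db/queries/settings.py.
# The real implementation would hit the user_settings table.
DEFAULTS = {"theme": "light", "notifications": True, "timezone": "UTC"}
_store: dict[int, dict] = {}

def get_settings(user_id: int) -> dict:
    """Interface contract: dict with keys theme, notifications, timezone."""
    return {**DEFAULTS, **_store.get(user_id, {})}

def update_settings(user_id: int, settings: dict) -> bool:
    """Interface contract: True on success; rejects keys outside the schema."""
    if set(settings) - set(DEFAULTS):
        return False
    _store[user_id] = {**_store.get(user_id, {}), **settings}
    return True
```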
The Interface Contract Pattern
File ownership prevents collisions. Interface contracts prevent semantic conflicts.
When two agents work in parallel, neither can see what the other is producing. Agent 1 might return settings as a flat dictionary. Agent 2 might expect a nested object with a preferences key. Both pass their own tests. Both break when merged.
The fix is defining contracts in the prompt before any agent starts coding.
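A cheap way to make a prose contract enforceable is a shared check that every agent runs against its own output or fixtures. A sketch, using the schema from the Agent 1 prompt:

```python
def check_settings_contract(payload: dict) -> None:
    """Fail fast if a payload drifts from the agreed settings schema.

    Both the producer (db layer) and the consumer (API layer) run this
    against their own test fixtures, so a flat-vs-nested mismatch
    surfaces before the merge instead of after it.
    """
    assert set(payload) == {"theme", "notifications", "timezone"}, payload
    assert isinstance(payload["theme"], str)
    assert isinstance(payload["notifications"], bool)
    assert isinstance(payload["timezone"], str)
```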
## Agent 2: API Endpoint
You are building the REST endpoint for user settings.
**Files you own:**
- src/api/routes/settings.py
- src/api/schemas/settings.py
**Upstream dependency (already implemented, import directly):**
- from src.db.queries.settings import get_settings, update_settings
- get_settings(user_id: int) -> dict with keys: theme, notifications, timezone
- update_settings(user_id: int, settings: dict) -> bool
**Downstream contract (the frontend will call these):**
- GET /api/settings/{user_id} -> JSON matching the schema above
- PUT /api/settings/{user_id} with JSON body -> 200 on success
**Do not touch:** Database code, frontend code, authentication middleware.
Assume auth middleware already extracts user_id.
The upstream dependency section tells Agent 2 exactly what Agent 1 will produce, even though Agent 1 hasn't finished yet. The downstream contract tells Agent 3 (frontend) what to expect from Agent 2.
You write these contracts. The agents implement to them. This is the planning work that makes parallelism possible.
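To make the two-sided contract concrete, here is a framework-agnostic sketch of the handlers Agent 2 might produce. The upstream functions are stubbed inline so the sketch is self-contained; in the real file they would be imported from `src.db.queries.settings`, and the handlers would be wrapped in whatever routing framework the repo uses.

```python
# Stubs standing in for Agent 1's layer (imported in the real file).
def get_settings(user_id: int) -> dict:
    return {"theme": "dark", "notifications": True, "timezone": "UTC"}

def update_settings(user_id: int, settings: dict) -> bool:
    return set(settings) <= {"theme", "notifications", "timezone"}

def get_settings_handler(user_id: int) -> tuple[int, dict]:
    """GET /api/settings/{user_id} -> (status, JSON body per the schema)."""
    return 200, get_settings(user_id)

def put_settings_handler(user_id: int, body: dict) -> tuple[int, dict]:
    """PUT /api/settings/{user_id} -> 200 on success, per the downstream contract."""
    if update_settings(user_id, body):
        return 200, {"status": "ok"}
    return 400, {"error": "invalid settings"}
```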
When Parallelism Costs More Than It Saves
BaristaLabs, a small agency that adopted Codex subagents early, put it bluntly: "Discovery is cheap. Mis-synthesis is expensive."
Their pattern is worth stealing. Use parallel agents for investigation (have three agents simultaneously explore the UI layer, backend, and test suite of unfamiliar code), then use a single agent for the actual implementation once you understand the problem.
For linear, sequential tasks, one agent is faster. The orchestration overhead of decomposing work, writing contracts, and merging branches only pays off when the work is genuinely independent. The practical sweet spot is three to five agents. Beyond that, your review time becomes the bottleneck.
Miguel Grinberg makes this point sharply: even if agents could fix every bug autonomously, you still have to review all that code before it merges. More agents means more code to review in parallel, and you can supervise far more agents than you can deeply understand the output of. That gap is where bugs ship.
The Merge Sequence Matters
Parallel execution, sequential merging. This is the pattern that ships clean code.
Cursor 3 runs agents in isolated git worktrees. Claude Code Agent Teams do the same via the `isolation: "worktree"` setting. Each agent works on its own branch against a clean copy of the repo.
The order you merge those branches matters. Start with the layer that has no dependencies (database), then merge the layer that depends on it (API), then the layer that depends on both (frontend). If you merge frontend first and the API contract changed, you're fixing conflicts in the wrong direction.
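The merge order is just a topological sort of the dependency graph. A sketch using the standard library's `graphlib` (branch names are illustrative):

```python
from graphlib import TopologicalSorter

def merge_order(deps: dict[str, set[str]]) -> list[str]:
    """Return branch names in safe merge order: a branch appears only
    after every branch it depends on has already been merged."""
    return list(TopologicalSorter(deps).static_order())

# db has no dependencies, api depends on db, frontend depends on both.
order = merge_order({"db": set(), "api": {"db"}, "frontend": {"db", "api"}})
# -> ["db", "api", "frontend"]
```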
Clash, an open-source tool built specifically for this problem, uses git merge-tree to detect conflicts between worktree pairs before any merge happens. It runs three-way merge checks in memory and alerts you to collisions while agents are still working. Worth adding to any multi-agent workflow.
A Realistic Decomposition Checklist
Before you split work across agents, run through this:
- Can each agent's output compile and pass lint independently? If not, the boundary is wrong.
- Does any file appear in more than one agent's ownership list? If yes, extract it into a shared contract or give it to one agent only.
- Are the interface contracts specific enough that two developers could implement both sides without talking to each other? That's the bar.
- Is each agent's task completable in under 30 minutes? If not, break it down further.
- Do you know the merge order? Dependencies flow one direction. Merge in that direction.
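The file-overlap check is mechanical enough to script before launching anything. A sketch that flags files claimed by more than one agent (the agent names and paths are illustrative):

```python
def ownership_conflicts(owners: dict[str, list[str]]) -> dict[str, list[str]]:
    """Map each file claimed by more than one agent to its claimants."""
    claims: dict[str, list[str]] = {}
    for agent, files in owners.items():
        for path in files:
            claims.setdefault(path, []).append(agent)
    return {path: agents for path, agents in claims.items() if len(agents) > 1}

overlaps = ownership_conflicts({
    "db": ["src/db/queries/settings.py"],
    "api": ["src/api/routes/settings.py", "src/db/queries/settings.py"],
})
# -> {"src/db/queries/settings.py": ["db", "api"]}
```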
Skip this checklist and you'll spend more time fixing merge conflicts than you saved by running agents in parallel. Do it well and three agents genuinely ship in the time one would take.
Where This Is Heading
Anthropic's 2026 data says developers use AI in about 60% of their work, but fully delegate only 0 to 20% of tasks. The gap between "AI-assisted" and "AI-completed" is exactly the decomposition and coordination skill described here. Tools are getting better at isolation (worktrees, sandboxed VMs, file locking). The bottleneck is the human's ability to break problems into agent-sized pieces with clean interfaces.
That's a prompt engineering skill, and it's one worth practicing now.
If your team is figuring out how to run parallel agents without the merge headaches, Kief Studio runs hands-on training sessions covering task decomposition, contract-driven prompting, and multi-agent coordination patterns. Connect with us on Discord (https://discord.gg/JfjyUdjJgP) or book a session at kief.studio/contact.