Techniques

Prompt Chaining

Splitting a complex task into a sequence of smaller prompts where each step's output feeds the next -- versus asking one mega-prompt to do everything.

First published April 14, 2026

Prompt chains succeed where single mega-prompts fail: on narrow, verifiable steps. Each prompt has one job, its output has a clean shape, and you can validate between steps.

Classic three-step chain: extract → transform → generate. Example: extract entities from a document, transform them into a structured record, generate a summary using the record. Each step is cheaper, more reliable, and more debuggable than a single mega-prompt asking for the final output directly.
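The pattern above can be sketched as a small chain runner: a list of (step, validator) pairs where each output feeds the next step, with a check in between. The step functions here are toy stand-ins for model calls (assumptions, not a real API); the control flow is the point.

```python
def run_chain(steps, initial_input):
    """Run (step_fn, validator) pairs in order; each output feeds the next."""
    data = initial_input
    for step_fn, validate in steps:
        data = step_fn(data)
        if not validate(data):
            raise ValueError(f"validation failed after {step_fn.__name__}")
    return data

def extract(text):
    # Toy "extraction": capitalized words stand in for entity detection.
    return [w.strip(".,") for w in text.split() if w[0].isupper()]

def transform(entities):
    # Shape raw entities into structured records.
    return [{"company": e} for e in entities]

def generate(records):
    # Produce the final text from the structured records.
    names = ", ".join(r["company"] for r in records)
    return f"Companies mentioned: {names}."

summary = run_chain(
    [(extract, lambda x: isinstance(x, list)),
     (transform, lambda x: all("company" in r for r in x)),
     (generate, lambda x: isinstance(x, str))],
    "Acme bought Globex.",
)
# summary == "Companies mentioned: Acme, Globex."
```

Because each step's output has a declared shape, a failure raises at the step that broke instead of surfacing as a subtly wrong final answer.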

Example Prompt

Step 1 prompt:
"Extract all company names from this article as a JSON list."

Step 2 prompt (feeds Step 1 output + original article):
"For each company in this list, find one sentence in the article that mentions them. Return as JSON: [{company, evidence}, ...]"

Step 3 prompt (feeds Step 2 output):
"Summarize the article in 3 sentences. Reference companies only if they appear in this evidence list."
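Wired together, the three prompts above become a short pipeline. A minimal sketch, assuming a `call_llm(prompt) -> str` function; here `fake_llm` simulates the model so the control flow runs end to end, and the Step 1 output is validated as JSON before Step 2 runs:

```python
import json

PROMPT_1 = "Extract all company names from this article as a JSON list.\n\n{article}"
PROMPT_2 = ("For each company in this list, find one sentence in the article "
            "that mentions them. Return as JSON: [{{company, evidence}}, ...]\n\n"
            "Companies: {companies}\n\nArticle: {article}")
PROMPT_3 = ("Summarize the article in 3 sentences. Reference companies only if "
            "they appear in this evidence list.\n\nEvidence: {evidence}\n\n"
            "Article: {article}")

def chain(article, call_llm):
    # Step 1: extract. Validate that the output parses as a JSON list.
    companies = json.loads(call_llm(PROMPT_1.format(article=article)))
    if not isinstance(companies, list):
        raise ValueError("step 1 did not return a JSON list")
    # Step 2: feeds Step 1 output + the original article.
    evidence = json.loads(call_llm(
        PROMPT_2.format(companies=json.dumps(companies), article=article)))
    # Step 3: feeds Step 2 output.
    return call_llm(PROMPT_3.format(evidence=json.dumps(evidence), article=article))

def fake_llm(prompt):
    # Simulated model responses keyed on the prompt's opening words.
    if prompt.startswith("Extract"):
        return '["Acme"]'
    if prompt.startswith("For each"):
        return '[{"company": "Acme", "evidence": "Acme grew last year."}]'
    return "A short summary."

result = chain("Acme grew last year.", fake_llm)
```

Swapping `fake_llm` for a real client is the only change needed to run this against a model; the validation between steps stays the same.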

When to use it

  • The end task decomposes naturally into narrow sub-tasks
  • You need to validate or modify intermediate results
  • A single prompt keeps failing on one specific step
  • You want different models for different steps (cheap model for extract, smart model for generate)
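The last point, routing cheap and smart models to different steps, reduces to a small dispatch table. A sketch under stated assumptions: the model names and the `client` callable are hypothetical placeholders, not a real SDK.

```python
# Hypothetical model names; substitute whatever your provider offers.
MODEL_FOR_STEP = {
    "extract": "small-fast-model",      # mechanical step: cheap model
    "generate": "large-capable-model",  # open-ended step: capable model
}

def call_step(step, prompt, client):
    """Dispatch a chain step to the model assigned to it."""
    return client(model=MODEL_FOR_STEP[step], prompt=prompt)

# Demo with a recording stub in place of a real client:
calls = []
def stub_client(model, prompt):
    calls.append(model)
    return f"<{model} output>"

call_step("extract", "List the companies.", stub_client)
call_step("generate", "Write the summary.", stub_client)
```

The table makes the cost/quality trade-off explicit and keeps it in one place when you later swap models.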

When NOT to use it

  • The sub-tasks are so coupled that splitting loses context the model needs
  • Latency matters and the chain adds 3-5x round-trip time
  • Each step works in isolation, but small errors compound across the full chain