Techniques

Chain-of-Thought (CoT)

Prompting the model to produce intermediate reasoning steps before its final answer, improving accuracy on multi-step problems.

First published April 14, 2026

Chain-of-thought shifts a model from "answer directly" to "reason then answer." Classic form: add "Let's think step by step" to the prompt, or show a few-shot example that includes reasoning before the answer.
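The two classic forms above can be sketched as small prompt builders. This is a minimal illustration; the trigger phrase and the worked example inside `few_shot_cot` are assumptions, not a fixed API.

```python
# Sketch of the two classic CoT prompt forms described above.
# The exact trigger phrase and the few-shot example are illustrative choices.

def zero_shot_cot(question: str) -> str:
    """Zero-shot CoT: append a reasoning trigger after the question."""
    return f"{question}\n\nLet's think step by step, then give the final answer."

def few_shot_cot(question: str) -> str:
    """Few-shot CoT: prepend a worked example whose answer shows its reasoning."""
    example = (
        "Q: A train travels 60 miles in 1.5 hours. What is its speed?\n"
        "A: Speed = distance / time = 60 / 1.5 = 40 mph. Final answer: 40 mph.\n\n"
    )
    return example + f"Q: {question}\nA:"

print(zero_shot_cot("What is 17 * 24?"))
```

Either string is then sent to the model as the user prompt; the few-shot form tends to give more control over the reasoning style, at the cost of extra input tokens.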

On reasoning-tuned models (Claude 4.6, GPT-5, o-series), explicit CoT instructions are often redundant -- the model already reasons internally. On smaller or non-reasoning models, CoT still matters and can roughly double accuracy on math and logic benchmarks.

Example Prompt

Problem: A shop sells apples at 3 for $2 and oranges at 5 for $3. What does it cost to buy 12 apples and 15 oranges?

Think step by step, then give the final price.
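For reference, the intermediate steps a CoT response should surface for this prompt are plain unit-price arithmetic:

```python
# The reasoning steps a good CoT response would make explicit:
apples_cost = (12 // 3) * 2    # 12 apples = 4 bundles of 3 at $2 each -> $8
oranges_cost = (15 // 5) * 3   # 15 oranges = 3 bundles of 5 at $3 each -> $9
total = apples_cost + oranges_cost
print(total)  # 17
```

Without the step-by-step instruction, a smaller model is more likely to skip the bundling step and misprice one of the fruits.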

When to use it

  • Multi-step math, logic, or planning problems
  • The task requires combining multiple pieces of information
  • You're working with a smaller or non-reasoning model
  • You need the reasoning trace for debugging or auditing

When NOT to use it

  • Simple lookup or classification -- CoT just adds latency
  • Using a reasoning model that already thinks internally (wastes tokens)
  • Latency-sensitive endpoints where the reasoning trace adds no user-facing value