Techniques

Zero-Shot Prompting

Asking a model to perform a task with no examples of the expected output, relying entirely on the instruction and the model's pretrained knowledge.

First published April 14, 2026

Zero-shot prompting is the default way most users interact with LLMs: you describe the task, the model produces output. No examples are included in the prompt.

Frontier models in 2026 are good enough at following instructions that zero-shot often beats few-shot for well-defined tasks with clear grading criteria. The more your instruction explicitly constrains the output shape, the less you need examples.

Example Prompt

Classify the following customer review as one of: positive, negative, neutral. Return only the label, nothing else.

Review: "Shipping took forever but the product is great."
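A zero-shot prompt like this is just an instruction plus the input. As a minimal sketch, the helpers below build the prompt above and normalize the model's reply against the allowed labels; the function names (`build_zero_shot_prompt`, `parse_label`) are hypothetical, and the sketch deliberately leaves out the model call, since the document assumes no particular API.

```python
# Hypothetical helpers for the zero-shot classification prompt above.
# Pass the built prompt to whatever completion API you use.
ALLOWED_LABELS = {"positive", "negative", "neutral"}

def build_zero_shot_prompt(review: str) -> str:
    """Zero-shot: instruction + input, no examples in the prompt."""
    return (
        "Classify the following customer review as one of: "
        "positive, negative, neutral. Return only the label, nothing else.\n\n"
        f'Review: "{review}"'
    )

def parse_label(raw: str) -> str:
    """Normalize the model's reply; fall back to 'neutral' if off-spec."""
    label = raw.strip().lower().rstrip(".")
    return label if label in ALLOWED_LABELS else "neutral"
```

The `parse_label` step matters in practice: even with "Return only the label, nothing else", replies occasionally arrive with trailing punctuation or different casing, so validating against the allowed set keeps downstream code safe.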

When to use it

  • The task is well-known (classification, summarization, translation)
  • You have strong instruction clarity
  • You are targeting a frontier model (GPT-5, Claude 4, Gemini 3)
  • You need the prompt to stay short for cost/latency reasons

When NOT to use it

  • The task is domain-specific and the model doesn't know your terminology
  • The output format is unusual or has strict constraints that examples would clarify
  • You need a consistent style the model can't infer from the instruction alone
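In these cases the fix is usually to add examples, turning the zero-shot prompt into a few-shot one. As a sketch, assuming the same classification task from the example above, a hypothetical `build_few_shot_prompt` prepends labeled examples before the target input:

```python
# Hypothetical few-shot variant: prepend labeled examples when the
# format or terminology is not something the model can infer.
def build_few_shot_prompt(review: str, examples: list[tuple[str, str]]) -> str:
    instruction = (
        "Classify the following customer review as one of: "
        "positive, negative, neutral. Return only the label, nothing else."
    )
    # Each example becomes a Review/Label pair, shown before the target.
    shots = "\n\n".join(f'Review: "{r}"\nLabel: {lbl}' for r, lbl in examples)
    return f'{instruction}\n\n{shots}\n\nReview: "{review}"\nLabel:'
```

Ending the prompt with a bare `Label:` steers the model to complete it with just the label, mirroring the format the examples establish. The cost is a longer prompt, which is exactly the trade-off the "short for cost/latency reasons" bullet above points at.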