Prompt Engineering Glossary

Every term you need, with working example prompts and practical notes on when each technique earns its keep.

14 terms in Techniques

Techniques

Chain-of-Thought (CoT)

Prompting the model to produce intermediate reasoning steps before its final answer, improving accuracy on multi-step problems.
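A minimal sketch of a CoT prompt -- the trigger phrase and the "Answer:" marker are illustrative conventions, not a fixed format:

```python
question = "A shop sells pens at $3 each, or 2 for $5. What is the cheapest price for 7 pens?"

# The trigger asks for intermediate steps; the marker makes the final
# answer easy to extract programmatically.
cot_prompt = (
    f"{question}\n\n"
    "Think step by step. Show your reasoning, then give the final "
    "answer on a new line starting with 'Answer:'."
)
```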

Few-Shot Prompting

Including 1-5 worked examples of the desired input/output pattern in the prompt so the model can infer the task format and style.
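A sketch of building a few-shot prompt from worked examples, here for sentiment labeling (the examples and labels are invented for illustration):

```python
examples = [
    ("The battery died after an hour.", "negative"),
    ("Setup took thirty seconds. Flawless.", "positive"),
    ("It arrived on Tuesday.", "neutral"),
]

def few_shot_prompt(examples, query):
    # Each shot demonstrates the exact input/output format we want back.
    shots = "\n\n".join(f"Review: {text}\nLabel: {label}" for text, label in examples)
    return f"{shots}\n\nReview: {query}\nLabel:"

prompt = few_shot_prompt(examples, "Great screen, terrible speakers.")
```

Ending the prompt at "Label:" leaves the model one obvious completion: the label itself.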

In-Context Learning (ICL)

The capability of a pretrained LLM to learn a task at inference time from examples provided in the prompt -- no weight updates, no fine-tuning.
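One way to see ICL in isolation: an invented task (a made-up "cipher" language) that cannot be stored in the weights, so any correct completion must come from pattern induction over the prompt itself:

```python
# "Blorpish" is invented here; the rule (reverse each word) is only
# ever demonstrated, never stated.
prompt = (
    "Translate to Blorpish by example:\n"
    "hello world -> olleh dlrow\n"
    "good morning -> doog gninrom\n"
    "prompt engineering ->"
)
```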

Meta-Prompting

Using a language model to write or improve prompts that another model (or the same one) will execute -- treating prompt engineering itself as a prompt engineering task.
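A sketch of a meta-prompt -- the wording and the improvement criteria (audience, length, format) are illustrative choices, not a standard recipe:

```python
task_prompt = "Summarize this support ticket."

# The model's output here is itself a prompt, to be executed later.
meta_prompt = (
    "You are a prompt engineer. Improve the prompt below so it specifies "
    "audience, desired length, and output format. Return only the "
    "improved prompt, nothing else.\n\n"
    f"PROMPT TO IMPROVE:\n{task_prompt}"
)
```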

Negative Prompting

Explicitly stating what the model should NOT do, NOT include, or NOT sound like -- in addition to (or instead of) describing the desired output.
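A sketch combining a positive instruction with negative constraints (the constraints themselves are illustrative):

```python
prompt = (
    "Write a product description for a standing desk.\n"
    "Do NOT use superlatives ('best', 'ultimate').\n"
    "Do NOT mention competitors.\n"
    "Do NOT exceed 80 words."
)
```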

Prompt Chaining

Splitting a complex task into a sequence of smaller prompts where each step's output feeds the next -- versus asking one mega-prompt to do everything.
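A chaining sketch where each call's output is interpolated into the next prompt. `call_model` is a hypothetical stand-in for whatever LLM client you use; here it is stubbed so the example runs:

```python
def call_model(prompt: str) -> str:
    # Stub for illustration; replace with a real LLM call.
    return f"<model output for: {prompt[:40]}...>"

article = "..."  # source text

# Four focused steps instead of one mega-prompt:
outline = call_model(f"Outline the key claims in this article:\n{article}")
draft = call_model(f"Write a summary following this outline:\n{outline}")
critique = call_model(f"List factual risks in this summary:\n{draft}")
final = call_model(
    f"Revise the summary to address these issues:\n{critique}\n\nSummary:\n{draft}"
)
```

Each intermediate result is also inspectable, which makes chains easier to debug than a single opaque prompt.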

Prompt Decomposition

Identifying the sub-problems inside a single prompt and addressing each explicitly -- in one prompt -- rather than asking the model to figure out the structure itself.
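A sketch of decomposition inside a single prompt -- the sub-problems are named up front rather than left for the model to discover (the review criteria are illustrative):

```python
prompt = (
    "Review this SQL query. Address each sub-problem separately:\n"
    "1. Correctness: does it return one row per customer?\n"
    "2. Performance: which indexes would it use?\n"
    "3. Safety: any injection risk if inputs are interpolated?\n"
    "4. Verdict: ship it or not, in one line.\n\n"
    "QUERY:\nSELECT * FROM orders JOIN customers USING (customer_id);"
)
```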

Role Prompting

Assigning the model a concrete professional role (e.g. "senior security engineer", "copy editor") to shape its framing, vocabulary, and priorities -- distinct from broader persona prompting.
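A sketch of role prompting via a system message, shown as a plain OpenAI-style message list (the exact client API is up to you):

```python
messages = [
    {
        "role": "system",
        "content": (
            "You are a senior security engineer reviewing code for "
            "vulnerabilities. Prioritize injection, authn/authz, and "
            "secrets handling."
        ),
    },
    {"role": "user", "content": "Review this login handler: ..."},
]
```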

Self-Consistency

Generating N independent answers to the same prompt (with temperature > 0) and picking the majority answer -- trading compute for reliability.
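The voting step is just a majority count over N sampled answers. A minimal sketch (the five sampled answers are invented for illustration):

```python
from collections import Counter

def majority_vote(answers: list[str]) -> str:
    """Pick the most common answer across N independent samples."""
    return Counter(answers).most_common(1)[0][0]

# Five samples of the same prompt at temperature > 0:
samples = ["408", "408", "398", "408", "418"]
answer = majority_vote(samples)  # "408"
```

Note this only works when answers can be compared for equality, which is why self-consistency is usually paired with a strict answer format (see Structured Output).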

Step-Back Prompting

Before answering a specific question, asking the model to derive the general principle or high-level abstraction that the question falls under -- then using that principle to answer.
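Step-back prompting naturally falls out as a two-step chain: derive the principle, then apply it. `call_model` is a hypothetical stub so the sketch runs:

```python
def call_model(prompt: str) -> str:
    # Stub for illustration; replace with a real LLM call.
    return f"[model reply to: {prompt.splitlines()[0]}]"

question = "Why does a helium balloon tilt forward when a car accelerates?"

# Step 1: abstract away from the specific question.
principle = call_model(
    f"What general physics principle governs this question?\n{question}"
)

# Step 2: answer the original question using the principle.
answer = call_model(
    f"Using this principle:\n{principle}\n\nNow answer:\n{question}"
)
```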

Structured Output

Constraining a model to produce JSON, XML, or other machine-parseable output conforming to a schema -- so downstream code can consume it reliably.
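A minimal sketch: state the shape in the prompt, then validate the reply before downstream code trusts it (the invoice fields and the example reply are invented):

```python
import json

prompt = (
    "Extract the invoice fields as JSON matching exactly this shape:\n"
    '{"vendor": str, "total": float, "due_date": "YYYY-MM-DD"}\n'
    "Return only the JSON, no prose.\n\nINVOICE:\n..."
)

# Example of what a well-behaved reply might look like:
reply = '{"vendor": "Acme", "total": 129.5, "due_date": "2025-07-01"}'

data = json.loads(reply)  # raises on malformed output
assert set(data) == {"vendor", "total", "due_date"}
```

Many APIs also offer server-side enforcement (JSON modes, schema-constrained decoding), which is stricter than prompt-only constraints.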

System Prompt

The high-priority, typically hidden instruction that sets a model's persona, rules, and constraints for an entire conversation.
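In chat-style APIs the system prompt is simply the first message with a special role, applying to every subsequent turn (the assistant's rules here are illustrative):

```python
messages = [
    {
        "role": "system",
        "content": (
            "You are a support assistant for Acme Co. Answer only from "
            "the provided docs. If unsure, say so. Never discuss pricing."
        ),
    },
    {"role": "user", "content": "How do I reset my password?"},
]
```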

Tree of Thoughts (ToT)

A prompting strategy where the model explores multiple reasoning branches, evaluates each, and selects (or combines) the best -- versus linear chain-of-thought which commits to one path.
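A skeleton of the search loop: propose candidate next steps, score each branch, keep the best, repeat. `propose` and `score` are stubs standing in for LLM calls, so the shape runs end to end:

```python
def propose(state: str, k: int = 3) -> list[str]:
    # Stub: an LLM would generate k candidate next reasoning steps.
    return [f"{state} -> step{i}" for i in range(k)]

def score(state: str) -> float:
    # Stub: an LLM would rate how promising this branch looks.
    return float(len(state))

def tree_of_thoughts(problem: str, depth: int = 2, beam: int = 2) -> str:
    frontier = [problem]
    for _ in range(depth):
        # Expand every surviving branch, then prune to the top `beam`.
        candidates = [s for state in frontier for s in propose(state)]
        frontier = sorted(candidates, key=score, reverse=True)[:beam]
    return frontier[0]

best = tree_of_thoughts("Plan the proof")
```

This is a beam search over reasoning states; linear chain-of-thought is the degenerate case with `k = 1` and `beam = 1`.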

Zero-Shot Prompting

Asking a model to perform a task with no examples of the expected output -- relying entirely on the instruction and the model's pretrained knowledge.
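The zero-shot version of the earlier sentiment task -- instruction only, no worked examples:

```python
prompt = (
    "Classify the sentiment of this review as positive, negative, or "
    "neutral. Reply with one word.\n\n"
    "Review: Great screen, terrible speakers."
)
```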