Reference

Prompt Engineering Glossary

Every term you need, with working example prompts and practical notes on when each technique earns its keep.

10 terms in Applied

Applied

Agentic RAG

RAG where the model doesn't passively consume retrieved context, but actively decides what to retrieve, iterates queries, and judges retrieval quality before answering.
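A minimal sketch of that retrieve-judge-rewrite loop. The knowledge base, the sufficiency check, and the query rewrite are all stand-ins (a real agent would make LLM calls for the last two); only the control flow is the point.

```python
# Toy knowledge base standing in for a vector store or search index.
KB = {
    "context window": "A context window is the token span a model can attend to at once.",
}

def retrieve(query: str):
    """Stand-in retriever: exact-key lookup against the toy KB."""
    return KB.get(query)

def sufficient(passage) -> bool:
    """Stub for the model judging retrieval quality before answering."""
    return passage is not None

def rewrite(query: str) -> str:
    """Stub for the model reformulating a query that retrieved nothing."""
    return query.replace("what is a ", "")

def agentic_answer(question: str, max_steps: int = 3) -> str:
    """Retrieve, judge, and iterate the query until context is good enough."""
    query = question
    for _ in range(max_steps):
        passage = retrieve(query)
        if sufficient(passage):
            return passage  # a real agent would now generate from this context
        query = rewrite(query)
    return "No good context found."
```

The loop is what makes it agentic: the model controls retrieval rather than consuming whatever one fixed query returned.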

Chunking

Splitting a document into smaller pieces before embedding and indexing -- so retrieval returns coherent, focused passages instead of entire docs.
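A sketch of the simplest variant, fixed-size character windows with overlap so each chunk carries a little surrounding context; production chunkers usually split on sentence or section boundaries instead.

```python
def chunk(text: str, size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into fixed-size character windows that overlap by
    `overlap` characters, so no boundary loses its context entirely."""
    step = size - overlap
    return [text[start:start + size]
            for start in range(0, len(text), step)
            if text[start:start + size]]

pieces = chunk("word " * 100, size=200, overlap=50)
# The tail of each chunk reappears at the head of the next.
```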

Evals

Automated test suites for LLM outputs -- measuring accuracy, safety, or task-specific quality metrics against labeled inputs, so you know when a prompt or model change helps or regresses.
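A minimal eval harness, sketched with a canned stand-in for the model call: each case pairs an input with a pass/fail checker, and the suite reports a pass rate you can track across prompt or model changes.

```python
def run_model(prompt: str) -> str:
    """Stand-in 'model': returns canned answers for the demo.
    Replace with a real LLM call in practice."""
    return {"capital of France?": "Paris"}.get(prompt, "I don't know")

# Each case: (input, checker over the model's output).
cases = [
    ("capital of France?", lambda out: "Paris" in out),
    ("capital of Mars?",   lambda out: "don't know" in out),  # refusal expected
]

def run_evals(model, cases) -> float:
    """Run every case and return the fraction that passed."""
    results = [check(model(prompt)) for prompt, check in cases]
    return sum(results) / len(results)

score = run_evals(run_model, cases)  # → 1.0 for this canned model
```

The value is the baseline: rerun the same suite after any change and the score tells you whether it helped or regressed.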

Grounding

Constraining an LLM's response to information provided in the prompt -- typically retrieved documents -- rather than the model's parametric (trained-in) knowledge.
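A typical grounding prompt looks like the template below; the exact wording is illustrative, but the two ingredients are constant: the retrieved passages in the prompt, and an explicit instruction to answer only from them (with an out for when they don't contain the answer).

```python
# Passages a retriever returned for the user's question.
retrieved = [
    "The Eiffel Tower is 330 metres tall.",
    "It was completed in 1889.",
]

prompt = (
    "Answer using ONLY the context below. "
    "If the context does not contain the answer, say you don't know.\n\n"
    "Context:\n"
    + "\n".join(f"- {p}" for p in retrieved)
    + "\n\nQuestion: How tall is the Eiffel Tower?"
)
```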

Hallucination

An LLM confidently producing content that sounds plausible but is factually wrong, fabricated, or ungrounded -- a fundamental failure mode of all current generative models.

Hybrid Search

Retrieval that combines sparse (keyword, BM25) and dense (embedding, semantic) signals, typically fusing their rankings into one list -- beats either alone on most real corpora.
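Reciprocal Rank Fusion (RRF) is a common way to do the fusing: each list contributes 1/(k + rank) per document, and the summed scores produce the merged order. A sketch with hypothetical document IDs:

```python
def rrf(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Reciprocal Rank Fusion: merge ranked lists by summing
    1/(k + rank) for each document across all lists."""
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

sparse = ["d3", "d1", "d2"]   # BM25 (keyword) order
dense  = ["d1", "d2", "d4"]   # embedding (semantic) order
fused = rrf([sparse, dense])  # d1 and d2 rise: both signals agree on them
```

Documents that appear in both lists accumulate score from each, which is why agreement between the sparse and dense signals wins.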

Persona Prompting

Instructing the model to adopt a specific role, expertise level, or character -- typically via the system prompt -- to shape tone, depth, and domain framing.
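In practice the persona lives in the system message of a chat payload. The message shape below follows the common chat-completions convention rather than any specific vendor's API, and the persona text is illustrative:

```python
# Persona in the system message; the user message stays task-focused.
messages = [
    {"role": "system",
     "content": ("You are a senior Postgres DBA. Answer tersely, "
                 "assume the reader knows SQL, and flag risky operations.")},
    {"role": "user",
     "content": "How do I drop a column from a 2 TB table?"},
]
```

The same question with a "patient beginner tutor" persona would yield a longer, gentler answer, which is the whole point: the persona shapes tone and depth without changing the task.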

Reranker

A model that takes a candidate list of retrieved documents and a query, and produces a more accurate ranking -- applied after initial retrieval (BM25, vector, or hybrid) as a second pass.
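A sketch of that second pass. The `score` function here is a toy word-overlap stand-in for a real cross-encoder (which reads the query and document together and is far more accurate, but too slow to run over the whole corpus, hence the two-stage design):

```python
def score(query: str, doc: str) -> float:
    """Toy relevance score: count of shared lowercase words.
    Stand-in for a cross-encoder model."""
    q, d = set(query.lower().split()), set(doc.lower().split())
    return float(len(q & d))

def rerank(query: str, candidates: list[str], top_k: int = 2) -> list[str]:
    """Second pass: re-score the first-stage candidates and keep the best."""
    return sorted(candidates, key=lambda d: score(query, d), reverse=True)[:top_k]

candidates = [                       # order from first-stage retrieval
    "bm25 indexing internals",
    "resetting a forgotten password",
    "how to reset your password by email",
]
top = rerank("reset password", candidates)
```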

Retrieval-Augmented Generation (RAG)

A pattern where relevant context is retrieved from a knowledge base (vector store, search index, graph) and injected into the prompt before the model generates its answer.
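The skeleton of the pattern, with a toy keyword retriever standing in for a vector store and the final LLM call omitted: retrieve, inject into the prompt, then generate.

```python
# Toy knowledge base; a real system would query a vector store or index.
KB = [
    "Invoices are due within 30 days.",
    "Refunds take 5-7 business days.",
    "Support is available 24/7 via chat.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Toy retriever: rank passages by word overlap with the query."""
    q = set(query.lower().split())
    return sorted(KB, key=lambda p: len(q & set(p.lower().split())),
                  reverse=True)[:k]

def build_prompt(query: str) -> str:
    """Inject the retrieved passages into the prompt before generation."""
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer from the context."

prompt = build_prompt("How long do refunds take?")
# `prompt` now carries the refunds passage; an LLM call would come next.
```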

Semantic Search

Search that matches on meaning (via embeddings) rather than exact keywords -- finding relevant documents even when they share no surface words with the query.
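The mechanics reduce to nearest-neighbor search over embedding vectors, usually by cosine similarity. The hand-assigned 2-D vectors below stand in for a real embedding model; the point is that "cat" lands near "feline companion" in vector space despite sharing no surface words with it.

```python
import math

# Hand-assigned toy vectors standing in for learned embeddings.
EMBED = {
    "cat":               [0.85, 0.15],
    "feline companion":  [0.90, 0.10],
    "car engine":        [0.10, 0.90],
}

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity: dot product over the product of norms."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = lambda v: math.sqrt(sum(x * x for x in v))
    return dot / (norm(a) * norm(b))

def search(query: str, docs: list[str]) -> str:
    """Return the document whose embedding is closest to the query's."""
    qv = EMBED[query]
    return max(docs, key=lambda d: cosine(qv, EMBED[d]))

best = search("cat", ["feline companion", "car engine"])
```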