Applied

Grounding

Constraining an LLM's response to information provided in the prompt -- typically retrieved documents -- rather than the model's parametric (trained-in) knowledge.

First published April 14, 2026

Grounding is the goal of RAG: "answer only from the context below, and cite which source each claim came from." A grounded model says "I don't know" when the retrieved context doesn't cover the question, instead of confabulating from training data.

Enforcement combines (a) explicit instruction in the system prompt, (b) citation requirements (forcing the model to quote and attribute), and (c) post-hoc verification (checking that cited claims actually appear in the source). Without all three, ungrounded output leaks in under stress. Grounding is what separates "LLM that looks informed" from "LLM you can audit."
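
The post-hoc verification step (c) can be approximated cheaply. A minimal sketch: split the answer into sentences, require each to carry a [n] citation, and do a crude content-word overlap check against the cited document. (The function name and the overlap heuristic are illustrative; a production system would use a stronger entailment check.)

```python
import re

def verify_citations(answer: str, documents: dict[int, str]) -> list[str]:
    """Flag sentences whose citations are missing, dangling, or show
    no lexical overlap with the cited document (a crude support check)."""
    problems = []
    # Rough sentence split on terminal punctuation.
    sentences = re.split(r"(?<=[.!?])\s+", answer.strip())
    for sentence in sentences:
        cited = [int(n) for n in re.findall(r"\[(\d+)\]", sentence)]
        if not cited:
            problems.append(f"uncited claim: {sentence!r}")
            continue
        # Content words of the claim, with citation markers stripped.
        claim = re.sub(r"\[\d+\]", "", sentence).lower()
        words = set(re.findall(r"[a-z]{4,}", claim))
        for n in cited:
            if n not in documents:
                problems.append(f"citation [{n}] has no matching document")
            elif words and not words & set(re.findall(r"[a-z]{4,}", documents[n].lower())):
                problems.append(f"citation [{n}] shows no overlap with: {sentence!r}")
    return problems
```

An empty return means every sentence was cited and each citation passed the overlap check; anything in the list is a candidate for rejection or regeneration.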

Example Prompt

You are a policy research assistant.

Answer using ONLY the documents provided below. If the answer is not
supported by these documents, respond: "I don't have information on
that in the provided sources."

Cite each factual claim inline using [1], [2], etc., referring to the
numbered documents. If you cannot cite a source for a claim, do not
include the claim.

Documents:
[1] {doc_1}
[2] {doc_2}
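
Filling the `{doc_1}`, `{doc_2}` slots is typically done at retrieval time. A minimal sketch of the assembly step, assuming a variable-length list of retrieved documents (the function name is illustrative):

```python
SYSTEM_PROMPT = """You are a policy research assistant.

Answer using ONLY the documents provided below. If the answer is not
supported by these documents, respond: "I don't have information on
that in the provided sources."

Cite each factual claim inline using [1], [2], etc., referring to the
numbered documents. If you cannot cite a source for a claim, do not
include the claim.

Documents:
{documents}"""

def build_grounded_prompt(documents: list[str]) -> str:
    """Number the retrieved documents and slot them into the template."""
    numbered = "\n".join(f"[{i}] {doc}" for i, doc in enumerate(documents, start=1))
    return SYSTEM_PROMPT.format(documents=numbered)
```

Numbering the documents at assembly time keeps the citation markers in the model's output aligned with the retrieval order, which is what the post-hoc verification step relies on.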

When to use it

  • Regulated / audited domains (legal, medical, compliance)
  • User-facing apps where hallucination is a material risk
  • Documentation assistants, product support with proprietary data

When NOT to use it

  • Casual / creative use cases where strict grounding is overkill
  • The context is trusted but the user wants synthesis beyond it