Reference
Prompt Engineering Glossary
Every term you need, with working example prompts and practical notes on when each technique earns its keep.
8 terms in Security
Guardrails
Programmatic checks -- before, during, or after LLM generation -- that enforce policy, block unsafe outputs, or validate shape / content independent of what the model would produce on its own.
SecurityIndirect Prompt Injection
An attack where malicious instructions are embedded in content the model fetches (webpages, documents, emails) rather than typed directly by the attacker into the prompt.
SecurityJailbreak
A prompt that bypasses a model's safety training to elicit content or behavior the model was aligned to refuse.
SecurityOWASP LLM Top 10
A community-maintained catalog of the ten most critical security risks for LLM-integrated applications -- the industry-standard starting point for LLM threat modeling.
SecurityPrompt Injection
An attack where user-controlled or third-party content smuggles new instructions into an LLM's context, overriding or subverting the system prompt.
SecurityPrompt Injection Defense
A layered set of mitigations -- input filtering, output constraints, privilege boundaries, behavioral monitoring -- that reduce prompt injection impact. No single defense is sufficient.
SecurityPrompt Leaking
An attack that extracts the system prompt, tool definitions, or other hidden context from a deployed LLM -- exposing proprietary prompt IP, credentials, or hints at injection vectors.
SecurityRed Teaming
Adversarial testing of an LLM system -- deliberately trying to break it -- to surface safety failures, injection vectors, and misuse paths before real users (or attackers) find them.
