Reference
Prompt Engineering Glossary
Every term you need, with working example prompts and practical notes on when each technique earns its keep.
3 terms starting with "P" in Security
Security
Prompt Injection
An attack where user-controlled or third-party content smuggles new instructions into an LLM's context, overriding or subverting the system prompt.
SecurityPrompt Injection Defense
A layered set of mitigations -- input filtering, output constraints, privilege boundaries, behavioral monitoring -- that reduce prompt injection impact. No single defense is sufficient.
SecurityPrompt Leaking
An attack that extracts the system prompt, tool definitions, or other hidden context from a deployed LLM -- exposing proprietary prompt IP, credentials, or hints at injection vectors.
