Reference
Prompt Engineering Glossary
Every term you need, with working example prompts and practical notes on when each technique earns its keep.
6 terms starting with "P"
Persona Prompting
Instructing the model to adopt a specific role, expertise level, or character -- typically via the system prompt -- to shape tone, depth, and domain framing.
TechniquesPrompt Chaining
Splitting a complex task into a sequence of smaller prompts where each step's output feeds the next -- versus asking one mega-prompt to do everything.
TechniquesPrompt Decomposition
Identifying the sub-problems inside a single prompt and addressing each explicitly -- in one prompt -- rather than asking the model to figure out the structure itself.
SecurityPrompt Injection
An attack where user-controlled or third-party content smuggles new instructions into an LLM's context, overriding or subverting the system prompt.
SecurityPrompt Injection Defense
A layered set of mitigations -- input filtering, output constraints, privilege boundaries, behavioral monitoring -- that reduce prompt injection impact. No single defense is sufficient.
SecurityPrompt Leaking
An attack that extracts the system prompt, tool definitions, or other hidden context from a deployed LLM -- exposing proprietary prompt IP, credentials, or hints at injection vectors.
