Reference

Prompt Engineering Glossary

Every term you need, with working example prompts and practical notes on when each technique earns its keep.

8 terms  in Security

Security

Guardrails

Programmatic checks -- before, during, or after LLM generation -- that enforce policy, block unsafe outputs, or validate shape / content independent of what the model would produce on its own.

Security

Indirect Prompt Injection

An attack where malicious instructions are embedded in content the model fetches (webpages, documents, emails) rather than typed directly by the attacker into the prompt.

Security

Jailbreak

A prompt that bypasses a model's safety training to elicit content or behavior the model was aligned to refuse.

Security

OWASP LLM Top 10

A community-maintained catalog of the ten most critical security risks for LLM-integrated applications -- the industry-standard starting point for LLM threat modeling.

Security

Prompt Injection

An attack where user-controlled or third-party content smuggles new instructions into an LLM's context, overriding or subverting the system prompt.

Security

Prompt Injection Defense

A layered set of mitigations -- input filtering, output constraints, privilege boundaries, behavioral monitoring -- that reduce prompt injection impact. No single defense is sufficient.

Security

Prompt Leaking

An attack that extracts the system prompt, tool definitions, or other hidden context from a deployed LLM -- exposing proprietary prompt IP, credentials, or hints at injection vectors.

Security

Red Teaming

Adversarial testing of an LLM system -- deliberately trying to break it -- to surface safety failures, injection vectors, and misuse paths before real users (or attackers) find them.