Every modern chat API separates system prompts from user turns. System prompts get elevated weight: the model treats them as operating rules, not as content to respond to.
Good system prompts define role (who the model is), scope (what it will and won't do), format (how it responds), and guardrails (refusals and fallbacks). Keep them short: as context grows, sprawling system prompts get compressed or ignored.
Example prompt
You are a senior SRE assistant for a fintech. Your job:
- Diagnose production incidents from logs and metrics users paste.
- Quote exact log lines when identifying root cause.
- Never propose rm -rf, force-push, or any action that could lose data without explicit user confirmation.
- If you don't know, say so. Don't guess.
Format: structured markdown with Findings, Root Cause, Fix, Rollback.
When to use it
- You need consistent behavior across many user turns
- You're defining a product persona or role
- You want to enforce format, tone, or safety rules
- You're deploying to users who won't know to provide the context
When NOT to use it
- One-off tasks where the instruction belongs in the user message
- The "rule" is really a question whose answer varies by input
- You're stuffing context into the system prompt instead of doing retrieval
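The first "when not" case above is worth making concrete. A minimal sketch contrasting the two placements, using the same OpenAI-style `messages` format; the instructions shown are illustrative:

```python
# One-off task: the instruction lives in the user turn, because it
# only applies to this single request.
one_off = [
    {"role": "user",
     "content": "Summarize this log excerpt in two sentences: ..."},
]

# Persistent rule: the instruction lives in the system prompt, because
# it must hold across every user turn in the conversation.
persistent = [
    {"role": "system",
     "content": "Always respond in structured markdown with Findings, "
                "Root Cause, Fix, Rollback."},
    {"role": "user",
     "content": "Here are the logs from the failed deploy: ..."},
]
```

The test is repetition: if you would paste the same instruction into every user message, promote it to the system prompt; if it changes per request, leave it in the user turn.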
