What is a System Prompt?
An instruction block that 'configures' an LLM before the user starts chatting — it sets the AI's persona, format, and limits.
A System Prompt is a block of instructions placed BEFORE every conversation with an LLM. It “configures” the model’s persona, behavior, output format, and limits. The user never sees it, but it constrains the model throughout the entire session.
How it differs from a user prompt
[System prompt] ← you (the developer) write this, hidden from the user
"You are an assistant for Vietnam Airlines, only answering about flight tickets..."
[User message] ← the user types this
"Buy a Hanoi - Saigon ticket for tomorrow"
[Assistant reply]
"To help you, I need to know..."
The system prompt is FIXED. The user prompt changes every turn.
What a system prompt can do
1. Assign a persona/role
“You are a high school math tutor — patient, using examples relevant to Vietnamese students…“
2. Set the output format
“Always reply in JSON with the schema { reply, confidence, sources }“
3. Limit the topic
“Only answer questions about product A. For any other topic → ‘I only support questions about A‘“
4. Inject background knowledge
“Here is context about the company: […]. Answer based on this.”
5. Tone & style
“Friendly, use emoji moderately, avoid technical jargon…”
What a system prompt CANNOT do
- Not truly secret: users can trick the model into leaking the system prompt via jailbreak
- Not a substitute for authentication: never put tokens or secrets in the system prompt
- Not 100% obeyed: the model can forget or be overridden by the user (you need external guardrails)
Real-world examples
Claude/ChatGPT
Every time you chat, there is a default system prompt from OpenAI/Anthropic underneath, describing who the model is, today’s date, and what it shouldn’t do.
Custom GPT, Claude Project
You write the system prompt for your own bot. This is the bot’s “soul.”
API
response = client.messages.create(
model="claude-sonnet-4-7",
system="You are an expert in Vietnamese law. Answer concisely and cite specific articles.",
messages=[
{ "role": "user", "content": "Where is the indefinite-term labor contract regulated?" }
]
)
Best practices for writing system prompts
- Be specific, not vague (“answer ≤ 200 words” instead of “concisely”)
- Provide few-shot examples in the system prompt for tricky tasks
- List edge cases and how the model should handle them
- Test with adversarial input: try to steer the model to other topics → does it keep the persona?
- Don’t bloat it — every token costs you on every request
Related
- Prompt Engineering
- Jailbreak — how users break system prompts