Large Language Models (LLMs)

System Prompt

Definition

A system prompt is the foundational instruction layer in LLM conversations, provided in the 'system' role before any user messages. It persists throughout the conversation and shapes how the model interprets and responds to all subsequent user messages. System prompts define: the assistant's persona and role ('You are a helpful customer support agent for 99helpers'), behavioral constraints ('Never mention competitors by name'), task focus ('Only answer questions about [product]'), output formatting requirements ('Always respond in 2-3 concise sentences'), safety instructions ('For medical/legal questions, recommend professional consultation'), and any relevant context (company information, product details). System prompts are typically not visible to end users and, in a properly aligned model, are not overridden by later user messages.

Why It Matters

System prompts are the primary configuration mechanism for LLM-based applications. They transform a general-purpose AI assistant into a specialized, constrained agent tailored to a specific use case. For 99helpers customers, the system prompt is where chatbot personality, knowledge scope, response format, and safety boundaries are defined. A well-crafted system prompt is often the difference between a generic chatbot that sometimes says the wrong thing and a reliable, on-brand assistant that consistently behaves within defined parameters. System prompt quality directly correlates with chatbot quality—it deserves careful iteration and testing equivalent to any other product specification.

How It Works

Effective system prompt structure:

  1. Role definition: who is the assistant? ('You are Alex, a helpful support agent for HelperApp.')
  2. Knowledge scope: what does it know and not know? ('Answer only questions about HelperApp. For other topics, say you can only help with HelperApp questions.')
  3. Behavioral guidelines: how should it respond? ('Be concise and friendly. Use bullet points for step-by-step instructions.')
  4. Constraints: what should it never do? ('Do not share pricing without directing users to the pricing page.')
  5. Escalation: when should it hand off? ('For billing disputes, transfer to human support.')

System prompts can also include few-shot examples and reference documents (for RAG-based knowledge injection).
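The five-part structure above can be sketched as a small prompt-assembly helper. This is an illustrative sketch, not a 99helpers API; all names and example strings are assumptions.

```python
# Sketch: assembling a system prompt from the five components described above.
# build_system_prompt and all example strings are illustrative, not a real API.

def build_system_prompt(role, scope, guidelines, constraints, escalation):
    """Join the five sections into one system prompt, separated by blank lines."""
    sections = [role, scope, guidelines, constraints, escalation]
    return "\n\n".join(s.strip() for s in sections if s)

prompt = build_system_prompt(
    role="You are Alex, a helpful support agent for HelperApp.",
    scope=("Answer only questions about HelperApp. For other topics, "
           "say you can only help with HelperApp questions."),
    guidelines=("Be concise and friendly. Use bullet points for "
                "step-by-step instructions."),
    constraints="Do not share pricing without directing users to the pricing page.",
    escalation="For billing disputes, transfer to human support.",
)
print(prompt)
```

Keeping the sections as separate parameters makes it easy to iterate on one component (say, the constraints) without touching the rest.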

System Prompt — Request Anatomy

API messages array

role: system (set once, developer-controlled)

You are a helpful support agent for Acme Corp. Only answer questions about Acme products. Respond in a professional, concise tone. Never discuss competitor products.

role: user (sent by end user at runtime)

How do I reset my password?

role: assistant (model response, follows both)

To reset your password, click "Forgot password" on the login page and follow the email instructions.
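The anatomy above maps directly onto the messages array sent to a chat-style API. The sketch below uses the common OpenAI-style payload shape as an assumption; adapt the structure to your provider.

```python
# The request anatomy above, expressed as a chat-style messages array.
# Payload shape follows the common OpenAI-style chat API (an assumption here);
# no network call is made in this sketch.

messages = [
    {"role": "system",
     "content": ("You are a helpful support agent for Acme Corp. "
                 "Only answer questions about Acme products. "
                 "Respond in a professional, concise tone. "
                 "Never discuss competitor products.")},
    {"role": "user", "content": "How do I reset my password?"},
]

# The model's reply comes back as an assistant message. Append it before the
# next user turn so the system prompt and prior turns persist as context:
messages.append(
    {"role": "assistant",
     "content": ('To reset your password, click "Forgot password" on the '
                 "login page and follow the email instructions.")})
```

Note that the system message is added once by the developer, while user and assistant messages accumulate over the conversation.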

What the system prompt controls

  • Persona & tone: e.g. professional, friendly, formal
  • Topic scope: e.g. only answer about X
  • Output format: e.g. respond in bullet points
  • Safety guardrails: e.g. never share pricing

Key point: The system prompt is invisible to end users but shapes every response. It is the primary mechanism for customizing LLM behavior for a specific application.

Real-World Example

A 99helpers chatbot system prompt for a SaaS product: 'You are Maya, a helpful support assistant for [Product]. Your job is to help users with setup, usage questions, troubleshooting, and account management. IMPORTANT: Only answer questions directly about [Product]. For medical, legal, or financial questions, tell the user you cannot help and suggest appropriate resources. Never discuss competitors. If you do not know the answer, say so clearly and suggest contacting support@product.com. Format step-by-step instructions as numbered lists. Keep responses under 200 words unless the question requires detailed explanation.' This roughly 150-token system prompt establishes persona, scope, safety, and formatting.
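In practice, a prompt like this is usually kept as a template with placeholders filled per customer. The sketch below parameterizes the example; "HelperApp" and the support address are hypothetical fill-in values.

```python
# Sketch: the example prompt as a reusable template. The fill-in values
# ("HelperApp", the support address) are hypothetical.

TEMPLATE = (
    "You are Maya, a helpful support assistant for {product}. "
    "Your job is to help users with setup, usage questions, troubleshooting, "
    "and account management. IMPORTANT: Only answer questions directly about "
    "{product}. For medical, legal, or financial questions, tell the user you "
    "cannot help and suggest appropriate resources. Never discuss competitors. "
    "If you do not know the answer, say so clearly and suggest contacting "
    "{support_email}. Format step-by-step instructions as numbered lists. "
    "Keep responses under 200 words unless the question requires detailed "
    "explanation."
)

system_prompt = TEMPLATE.format(product="HelperApp",
                                support_email="support@helperapp.example")
```

Templating keeps the tested prompt wording fixed while letting per-customer details vary in one place.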

Common Mistakes

  • Writing an overly restrictive system prompt that refuses many legitimate queries—balance helpfulness and constraints to avoid an assistant that constantly says 'I can't help with that.'
  • Not including output formatting instructions—without explicit formatting guidance, the model's response style varies widely across queries.
  • Treating the system prompt as immutable after initial deployment—system prompts require iteration based on observed failures; establish a process for versioning and updating.
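The versioning process from the last bullet can be as simple as recording every prompt revision with a version tag and a note about why it changed, so a failure can be traced to the exact prompt that produced it. A minimal sketch (all names illustrative):

```python
# Minimal sketch of system prompt versioning: store each revision with a
# version number, timestamp, and change note. Names are illustrative.
from datetime import datetime, timezone

prompt_versions = []

def register_prompt(text, note=""):
    """Record a new prompt revision and return its version number."""
    version = len(prompt_versions) + 1
    prompt_versions.append({
        "version": version,
        "text": text,
        "note": note,
        "created": datetime.now(timezone.utc).isoformat(),
    })
    return version

v1 = register_prompt("You are a support agent for Acme Corp.")
v2 = register_prompt(
    "You are a support agent for Acme Corp. Keep replies under 150 words.",
    note="v1 replies were too long")
```

A real deployment would likely store this in a database or version control, but the principle is the same: never overwrite a prompt in place without a record of what changed and why.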

Ready to build your AI chatbot?

Put these concepts into practice with 99helpers — no code required.
