One-Shot Prompting
Definition
One-shot prompting is the middle ground between zero-shot (no examples) and few-shot (multiple examples) prompting, providing exactly one demonstration of the target task. A single well-chosen example often provides enough pattern information for the model to understand the desired output format, level of detail, and tone without the additional cost and complexity of multiple examples. One-shot prompting is particularly effective for tasks with a clear, consistent output format where one demonstration is sufficient to establish the pattern—rather than tasks requiring coverage of multiple label types or edge cases.
Why It Matters
One-shot prompting represents the minimum demonstration investment for tasks where zero-shot instructions produce inconsistent formatting or tone. Adding even a single high-quality example often eliminates the most common formatting failures without the curation overhead of selecting and maintaining a diverse few-shot example set. For teams balancing prompt quality against token cost, one-shot prompting offers a favorable tradeoff: the marginal quality improvement from the second and third few-shot examples is often much smaller than the jump from zero to one example.
How It Works
A one-shot prompt has three parts: (1) the task instruction; (2) one example input-output pair demonstrating the desired format; (3) the actual input awaiting the model's response. The example should be representative of typical inputs (not an unusual edge case), demonstrate the exact output format required, and have a response length consistent with what's expected for production inputs. For classification tasks, one-shot often isn't enough to cover all classes; few-shot with at least one example per class is better. For extraction or formatting tasks, one-shot is often sufficient.
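The three-part structure above can be sketched as a small Python helper that assembles the instruction, the single demonstration, and the new input into one prompt string. This is a minimal illustration; the function name and the "Input:"/"Output:" delimiters are assumptions, not a prescribed format.

```python
def build_one_shot_prompt(instruction, example_input, example_output, new_input):
    """Assemble a one-shot prompt: instruction, one demonstration, then the new input.

    The trailing "Output:" cues the model to continue in the demonstrated format.
    """
    return (
        f"{instruction}\n\n"
        f"Input: {example_input}\n"
        f"Output: {example_output}\n\n"
        f"Input: {new_input}\n"
        f"Output:"
    )

# Reproduces the support-ticket example from this article.
prompt = build_one_shot_prompt(
    instruction=(
        "Convert the customer complaint into a structured support ticket "
        "with title, priority, and one-line summary."
    ),
    example_input=(
        "I can't log in. It keeps saying my password is wrong "
        "even after resetting it."
    ),
    example_output=(
        "Title: Login failure after password reset\n"
        "Priority: High\n"
        "Summary: User cannot authenticate despite successful password reset."
    ),
    new_input="The dashboard is blank after I log in. No data shows up at all.",
)
print(prompt)
```

The resulting string would be sent to the model as-is; because the demonstration precedes the new input, the model can copy its structure when completing after the final "Output:".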
One-Shot Prompting — Single Example, Generalizes to New Input
Prompt (sent to model)
Convert the customer complaint into a structured support ticket with title, priority, and one-line summary.
Example (1-shot)
"I can't log in — it keeps saying my password is wrong even after resetting it."
Title: Login failure after password reset
Priority: High
Summary: User cannot authenticate despite successful password reset.
"The dashboard is blank after I log in. No data shows up at all."
→ (model completes here)
Predicted Output
Title: Dashboard blank after login
Priority: High
Summary: User sees an empty dashboard with no data upon successful authentication.
Format generalized from 1 example — no fine-tuning required
Real-World Example
A legal tech startup tried zero-shot prompting to generate contract clause summaries. The model produced summaries ranging from 2 sentences to 8 paragraphs with inconsistent structure. Adding one exemplary clause-summary pair in the prompt (a medium-complexity clause with a 3-sentence summary in plain English) brought 87% of summaries to the target length range and consistent plain-English format—without the cost of curating 10+ diverse few-shot examples. The remaining 13% (highly technical IP clauses) were addressed by adding a second example specifically for IP clauses.
Common Mistakes
- ✕Choosing an unrepresentative example—a single example has outsized influence; it must reflect typical inputs, not outliers
- ✕Using one-shot when multiple output classes exist—a single example teaches only the demonstrated class's pattern; multi-class tasks need at least one example per class
- ✕Treating one-shot as always superior to zero-shot—for well-understood tasks with clear natural language instructions, zero-shot is equally good and cheaper
Related Terms
Few-Shot Prompting
Few-shot prompting provides an LLM with a small number of input-output examples within the prompt itself, demonstrating the desired task format and behavior so the model can generalize to new inputs without any fine-tuning.
Zero-Shot Prompting
Zero-shot prompting instructs an AI model to perform a task using only a description of what to do, with no worked examples—relying entirely on the model's pre-trained knowledge to generalize to the request.
Prompt Engineering
Prompt engineering is the practice of designing and refining the text inputs given to AI language models to reliably produce accurate, useful, and well-formatted outputs for specific tasks.
In-Context Learning
In-context learning is the LLM phenomenon of adapting to new tasks purely from examples or instructions provided in the prompt, without updating model weights—including zero-shot, one-shot, and few-shot scenarios.
Prompt Template
A prompt template is a reusable prompt structure with variable placeholders that are filled at runtime—enabling consistent, parameterized AI interactions that can be generated programmatically across many inputs.
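One-shot prompts are commonly stored as such templates, with the instruction, the single example, and the new input filled in at runtime. A minimal sketch using Python's built-in string formatting (the template constant and placeholder names are illustrative assumptions):

```python
# A reusable one-shot prompt template; placeholders are filled per request.
ONE_SHOT_TEMPLATE = """{instruction}

Input: {example_input}
Output: {example_output}

Input: {new_input}
Output:"""

# Fill the template for a hypothetical sentiment-classification task.
prompt = ONE_SHOT_TEMPLATE.format(
    instruction="Classify the sentiment of the review as Positive or Negative.",
    example_input="The checkout flow was smooth and fast.",
    example_output="Positive",
    new_input="The app crashes every time I open it.",
)
print(prompt)
```

Keeping the example pair inside the template (rather than hard-coding it into each call site) makes it easy to swap in a different demonstration, for instance the second IP-clause example mentioned in the real-world case above, without touching calling code.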