Prompt Engineering
Prompt engineering has emerged as a critical skill for anyone working with large language models. This category covers prompt design patterns — from zero-shot and few-shot prompting to chain-of-thought reasoning and self-consistency — as well as system prompts, prompt injection risks, and techniques for maintaining consistent model behavior. Good prompts are the difference between an AI that reliably solves problems and one that produces unpredictable results.
35 terms in this category
Adversarial Prompting
Adversarial prompting deliberately crafts inputs designed to cause LLMs to fail, bypass safety measures, or behave unexpectedly—used both maliciously to exploit AI systems and constructively to test and harden them.
Chain-of-Thought Prompting
Chain-of-thought prompting instructs an AI model to show its reasoning step-by-step before giving a final answer, dramatically improving accuracy on complex tasks like math, logic, and multi-step reasoning.
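For example, a minimal sketch of wrapping a question with a chain-of-thought instruction (the `build_cot_prompt` helper and its exact wording are illustrative, not a standard API):

```python
def build_cot_prompt(question: str) -> str:
    """Wrap a question with a chain-of-thought instruction (illustrative helper)."""
    return (
        f"Question: {question}\n"
        "Let's think step by step, then state the final answer "
        "on a line starting with 'Answer:'."
    )

prompt = build_cot_prompt("A train travels 120 km in 2 hours. What is its speed?")
print(prompt)
```

Asking for the final answer on a marked line also makes the response easy to parse programmatically.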
Context Window
The context window is the maximum amount of text—measured in tokens—that an LLM can process in a single interaction, determining how much conversation history, retrieved documents, and instructions the model can consider at once.
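A common consequence is having to trim conversation history to fit the window. A minimal sketch, assuming a rough 4-characters-per-token heuristic (a deliberate simplification; production code should use the provider's actual tokenizer):

```python
def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English text.
    # This is an assumption, not a real tokenizer.
    return max(1, len(text) // 4)

def fit_history(messages: list[str], budget: int) -> list[str]:
    """Keep the most recent messages that fit within a token budget."""
    kept, used = [], 0
    for msg in reversed(messages):          # walk newest-first
        cost = estimate_tokens(msg)
        if used + cost > budget:
            break
        kept.append(msg)
        used += cost
    return list(reversed(kept))             # restore chronological order

history = ["hello " * 50, "short question", "another short reply"]
print(fit_history(history, budget=20))
```

Dropping the oldest turns first is the simplest policy; real systems often summarize old turns instead of discarding them.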
Few-Shot Prompting
Few-shot prompting provides an LLM with a small number of input-output examples within the prompt itself, demonstrating the desired task format and behavior so the model can generalize to new inputs without any fine-tuning.
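A minimal sketch of assembling a few-shot prompt from example pairs (the `Input:`/`Output:` labels and the sentiment examples are illustrative choices, not a required format):

```python
def few_shot_prompt(examples: list[tuple[str, str]], query: str) -> str:
    """Assemble a few-shot prompt from (input, output) example pairs."""
    parts = [f"Input: {x}\nOutput: {y}" for x, y in examples]
    parts.append(f"Input: {query}\nOutput:")   # the model completes this line
    return "\n\n".join(parts)

examples = [("great movie!", "positive"), ("waste of time", "negative")]
prompt = few_shot_prompt(examples, "surprisingly good")
print(prompt)
```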
Grounding
Grounding anchors LLM responses to specific, verifiable sources of truth—such as retrieved documents, database records, or real-time data—preventing hallucination by constraining the model to facts it can explicitly reference.
Guardrails
Guardrails are constraints and safety mechanisms applied to AI systems—in prompts, code, or dedicated services—that prevent harmful, off-topic, or policy-violating outputs while preserving the model's usefulness for intended tasks.
Hallucination Mitigation
Hallucination mitigation encompasses prompt engineering and architectural techniques that reduce an LLM's tendency to generate confident but factually incorrect information—through grounding, instructions, retrieval, and verification.
In-Context Learning
In-context learning is the ability of large language models to adapt their behavior based on examples or instructions provided in the prompt at inference time—without updating model weights or performing any training.
Instruction Following
Instruction following is an LLM's ability to accurately understand and execute specific directives given in a prompt—a capability trained through instruction tuning and RLHF that determines how reliably the model does what it is told.
LLM Security
LLM security encompasses the practices, patterns, and tools that protect AI language model applications from attacks—including prompt injection, jailbreaks, data leakage, and abuse—ensuring safe, reliable, and policy-compliant operation.
Meta-Prompting
Meta-prompting uses an LLM to generate, improve, or optimize prompts for another LLM call—automating prompt engineering by treating prompt creation itself as a task that can be delegated to the model.
Negative Prompting
Negative prompting explicitly instructs an LLM what not to do, include, or say—using prohibition instructions to steer away from specific failure modes, unwanted topics, or problematic output patterns.
One-Shot Prompting
One-shot prompting provides a single input-output example in the prompt to demonstrate the desired task format, offering minimal guidance that can dramatically improve formatting consistency over zero-shot instructions alone.
Output Format Control
Output format control uses prompt instructions to specify exactly how an LLM should structure its response—as JSON, markdown, a numbered list, or a custom schema—ensuring outputs are machine-parseable and consistently structured.
Persona
A persona is a defined character identity assigned to an AI assistant—including name, personality traits, communication style, and domain expertise—creating a consistent, branded user experience across all interactions.
Prompt Chaining
Prompt chaining connects multiple LLM calls sequentially where each step's output becomes the next step's input, enabling complex multi-stage tasks that exceed what any single prompt can accomplish reliably.
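A minimal sketch of the pattern, with a stub standing in for the model call (the `stub_model` function and the two step templates are invented for illustration):

```python
def chain(steps: list[str], initial_input: str, model) -> str:
    """Run prompt templates sequentially; each output feeds the next step."""
    result = initial_input
    for template in steps:
        result = model(template.format(input=result))
    return result

def stub_model(prompt: str) -> str:
    # Stand-in for a real LLM call: echoes the text after the instruction.
    return prompt.split(":", 1)[1].strip().upper()

steps = ["Summarize: {input}", "Translate to French: {input}"]
print(chain(steps, "long article text", stub_model))
```

Because each step is a separate call, intermediate outputs can be validated or logged before the next step runs.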
Prompt Compression
Prompt compression reduces the token count of prompts and retrieved context without losing critical information—cutting inference costs and fitting more relevant content within the context window.
Prompt Engineering
Prompt engineering is the practice of designing and refining the text inputs given to AI language models to reliably produce accurate, useful, and well-formatted outputs for specific tasks.
Prompt Evaluation
Prompt evaluation is the systematic process of measuring how well a prompt performs across a representative test set—using automated metrics, human ratings, or model-as-judge scoring—to make data-driven prompt improvements.
Prompt Injection
Prompt injection is a security attack where malicious content in user input or retrieved data overrides an LLM's system instructions, causing the model to ignore its intended behavior and follow the attacker's instructions instead.
Prompt Leaking
Prompt leaking is a type of attack where a user manipulates an AI model into revealing its hidden system prompt, exposing proprietary instructions, personas, business logic, and constraints intended to be confidential.
Prompt Optimization
Prompt optimization is the systematic process of improving prompt performance through evaluation-driven iteration—testing prompt variants, measuring outcomes on benchmark datasets, and selecting changes that improve accuracy, consistency, or cost-efficiency.
Prompt Template
A prompt template is a reusable prompt structure with variable placeholders that are filled at runtime—enabling consistent, parameterized AI interactions that can be generated programmatically across many inputs.
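For instance, using Python's standard-library `string.Template` (the support-agent wording and placeholder names are illustrative):

```python
from string import Template

SUPPORT_TEMPLATE = Template(
    "You are a support agent for $product.\n"
    "Customer question: $question\n"
    "Answer concisely."
)

# Fill the placeholders at runtime for a specific request.
prompt = SUPPORT_TEMPLATE.substitute(
    product="AcmeDB",
    question="How do I reset my password?",
)
print(prompt)
```

`substitute` raises `KeyError` on missing placeholders, which catches template/variable mismatches early.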
Prompt Versioning
Prompt versioning tracks changes to prompts over time using version control systems, enabling rollback, A/B testing, audit trails, and safe deployment of prompt changes in production AI applications.
ReAct Prompting
ReAct prompting interleaves reasoning (Thought) and action steps within a prompt loop, enabling LLM agents to plan, use tools, observe results, and refine their approach iteratively to solve multi-step tasks.
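A minimal sketch of the loop structure, with a hard-coded policy standing in for the LLM (the `lookup` tool and the fixed decision logic are stand-ins; a real agent would let the model choose the action at each step):

```python
def react_loop(question: str, tools: dict, max_steps: int = 3) -> list[str]:
    """Thought/Action/Observation loop with a stub policy in place of an LLM."""
    trace = [f"Thought: I need to answer: {question}"]
    observation = None
    for _ in range(max_steps):
        if observation is None:
            # The stub 'model' always decides to look the question up once.
            trace.append(f"Action: lookup[{question}]")
            observation = tools["lookup"](question)
            trace.append(f"Observation: {observation}")
        else:
            trace.append(f"Final Answer: {observation}")
            break
    return trace

tools = {"lookup": lambda q: "Paris"}
for line in react_loop("capital of France?", tools):
    print(line)
```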
Retrieval-Augmented Prompting
Retrieval-augmented prompting dynamically injects relevant documents or facts into the prompt at query time, grounding the LLM's response in current, specific knowledge rather than relying solely on its static pre-trained memory.
Role Prompting
Role prompting assigns a specific persona or expert identity to an AI model within the prompt—such as 'You are an experienced tax accountant'—steering its responses toward domain-appropriate tone, vocabulary, and reasoning style.
Self-Consistency
Self-consistency is a prompting technique that samples multiple independent reasoning chains for the same question and takes the majority answer, significantly improving accuracy over single-sample chain-of-thought by reducing reasoning variance.
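The voting step itself is simple. A minimal sketch, assuming the final answers have already been extracted from each sampled reasoning chain (the sample data is invented for illustration):

```python
from collections import Counter

def self_consistent_answer(samples: list[dict]) -> str:
    """Majority vote over final answers from multiple sampled reasoning chains."""
    answers = [s["answer"] for s in samples]
    return Counter(answers).most_common(1)[0][0]

# Imagine five chain-of-thought samples for the same question:
samples = [{"answer": "42"}, {"answer": "42"}, {"answer": "41"},
           {"answer": "42"}, {"answer": "40"}]
print(self_consistent_answer(samples))
```

The chains must be sampled with nonzero temperature; at temperature 0 every chain would be identical and voting adds nothing.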
Structured Output
Structured output is the practice of constraining an LLM to generate responses that conform to a predefined schema—such as JSON or XML—enabling reliable programmatic parsing and downstream system integration.
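A minimal sketch of validating model output against a simple schema (the field names and the flat `{field: type}` schema format are illustrative; real systems often use JSON Schema or provider-enforced structured output modes):

```python
import json

def parse_structured(raw: str, required: dict):
    """Parse model output as JSON and check required fields and their types."""
    data = json.loads(raw)          # raises ValueError on malformed JSON
    for field, ftype in required.items():
        if not isinstance(data.get(field), ftype):
            raise ValueError(f"field {field!r} missing or not {ftype.__name__}")
    return data

raw = '{"sentiment": "positive", "confidence": 0.93}'
schema = {"sentiment": str, "confidence": float}
print(parse_structured(raw, schema))
```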
System Prompt
A system prompt is an instruction, typically hidden from the end user, given to an AI model before any user interaction that defines its persona, capabilities, constraints, and behavioral rules—forming the persistent foundation of every conversation.
Temperature
Temperature is a sampling parameter that controls the randomness of LLM outputs—low values produce focused, near-deterministic responses while high values produce more varied and creative outputs.
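The underlying math can be shown directly: logits are divided by the temperature before the softmax, so lower values sharpen the distribution (the example logits are arbitrary):

```python
import math

def softmax_with_temperature(logits: list[float], temperature: float) -> list[float]:
    """Convert logits to probabilities; lower temperature sharpens the distribution."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                           # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]
cold = softmax_with_temperature(logits, 0.2)  # near-deterministic: top token dominates
hot = softmax_with_temperature(logits, 2.0)   # flatter: more varied sampling
print(cold[0], hot[0])
```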
Token
A token is the basic unit of text processed by language models—roughly corresponding to a word fragment, whole word, or punctuation mark—and the unit by which LLM API costs are measured and context windows are sized.
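A rough illustration of text splitting into token-like units (this naive word/punctuation split is only an approximation; real LLMs use learned subword tokenizers such as BPE, so actual counts will differ):

```python
import re

def naive_tokenize(text: str) -> list[str]:
    """Split into words and punctuation marks as stand-in 'tokens'."""
    return re.findall(r"\w+|[^\w\s]", text)

tokens = naive_tokenize("Prompts cost money, so count tokens!")
print(tokens, len(tokens))
```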
Top-P Sampling
Top-P sampling (nucleus sampling) selects LLM output tokens from the smallest set of candidates whose combined probability exceeds a threshold P, providing more adaptive diversity control than temperature alone.
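The candidate-set selection can be sketched directly (the toy probability table is invented; this shows only the nucleus truncation step, after which a real sampler would renormalize and draw from the kept set):

```python
def nucleus(probs: dict, p: float) -> dict:
    """Smallest set of tokens whose cumulative probability reaches the threshold p."""
    kept, total = {}, 0.0
    for token, prob in sorted(probs.items(), key=lambda kv: -kv[1]):
        kept[token] = prob
        total += prob
        if total >= p:
            break
    return kept

probs = {"the": 0.5, "a": 0.3, "an": 0.15, "xyz": 0.05}
print(nucleus(probs, p=0.9))
```

Unlike a fixed top-k cutoff, the kept set shrinks when the model is confident and grows when it is uncertain.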
Tree-of-Thought Prompting
Tree-of-thought prompting extends chain-of-thought by having the model explore multiple reasoning branches in parallel, evaluate each branch's promise, and backtrack from dead ends—enabling systematic problem-solving for complex tasks.
Zero-Shot Prompting
Zero-shot prompting instructs an AI model to perform a task using only a description of what to do, with no worked examples—relying entirely on the model's pre-trained knowledge to generalize to the request.