ReAct Prompting
Definition
ReAct (Reasoning + Acting), introduced by Yao et al. (2022), is a prompting framework that combines chain-of-thought reasoning with tool use in an alternating loop: Thought → Action → Observation → Thought → Action → ... The model generates a thought explaining its current reasoning, then selects an action (search, calculate, call API), receives an observation (the tool's result), uses that observation to inform the next thought, and continues until the task is complete. ReAct enables LLMs to interact with external tools and environments while maintaining a coherent reasoning trace, forming the foundation of most modern LLM agent frameworks.
Why It Matters
ReAct is the standard pattern for building LLM agents that interact with external systems. Without a structured reasoning-action loop, tool-equipped LLMs tend to use tools haphazardly, fail to incorporate observations into subsequent decisions, or get stuck in repetitive loops. ReAct's explicit Thought steps create an interpretable audit trail showing exactly why the model took each action, which makes agent behavior debuggable and trustworthy. Most production agent frameworks (LangChain agents, Anthropic tool use, OpenAI function-calling loops) implement variants of the ReAct pattern.
How It Works
A ReAct prompt provides: (1) a description of available tools and their signatures; (2) format instructions specifying the Thought/Action/Observation structure; (3) few-shot examples of complete ReAct trajectories. At inference time, the model generates a Thought (its reasoning), then an Action (a tool call specification); the system executes the tool and appends the resulting Observation to the conversation history, and the model generates the next Thought from the updated context. The loop continues until the model emits a Final Answer action. The observation at each step provides ground-truth feedback that grounds subsequent reasoning.
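The loop described above can be sketched in a few lines. This is a minimal illustration, not a production framework: `llm` stands in for any callable that returns the model's next Thought/Action block (plug in your provider's API), the `calculate` tool and the `Action: name[input]` syntax are assumptions for the example, and real agents would add error handling and richer tool registries.

```python
import re

# Hypothetical tool registry; real deployments would call search APIs,
# databases, calculators, etc.
TOOLS = {
    "calculate": lambda expr: str(eval(expr, {"__builtins__": {}})),
}

def react_loop(llm, question, max_steps=5):
    """Run a minimal ReAct loop: alternate model turns and tool executions."""
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        turn = llm(transcript)          # model emits a Thought/Action block
        transcript += turn + "\n"
        # Stop when the model emits a final answer.
        final = re.search(r"Final Answer:\s*(.*)", turn)
        if final:
            return final.group(1).strip()
        # Otherwise parse Action: tool_name[input], execute, append Observation.
        action = re.search(r"Action:\s*(\w+)\[(.*?)\]", turn)
        if action:
            name, arg = action.group(1), action.group(2)
            result = TOOLS.get(name, lambda _: "unknown tool")(arg)
            transcript += f"Observation: {result}\n"
    return None  # step budget exhausted without a final answer
```

Because the transcript accumulates every Action-Observation pair, each new model turn is conditioned on all prior tool results, which is what distinguishes ReAct from a single tool call.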
Figure: the ReAct Thought → Action → Observation loop pattern.
Real-World Example
A customer support agent built on ReAct can answer complex account queries that require multiple data lookups. When a user asks 'Why did my bill increase this month?', the agent thinks: 'I need to compare current and previous invoices.' It calls get_invoice(month='current'), receives the invoice data, thinks 'I see a new premium feature charge. Let me check when this was added,' calls get_subscription_changes(account_id=...), observes that the premium tier was activated 3 weeks ago, thinks 'I should also check if this was user-initiated or automatic,' calls get_activity_log(), and synthesizes the complete answer from all three observations—a task impossible without the ReAct tool-use loop.
Common Mistakes
- ✕ Not providing clear tool descriptions: the model's tool selection is only as good as the descriptions of the tools in the prompt
- ✕ Ignoring observation quality: when tool outputs are verbose or poorly formatted, the model struggles to extract the relevant information
- ✕ Allowing unbounded reasoning loops: without a maximum step limit as a stopping condition, ReAct agents can cycle indefinitely on difficult problems
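The first mistake is worth making concrete. One common pattern is to generate the tool section of the prompt directly from function docstrings, so a vague docstring immediately becomes a vague tool description. The tool names and renderer below are hypothetical, shown only to illustrate the idea.

```python
# Hypothetical tools; their one-line docstrings become the prompt's tool
# descriptions, so vague docstrings directly degrade tool selection.
def search(query: str) -> str:
    """Search the knowledge base and return the top matching passage."""

def calculate(expression: str) -> str:
    """Evaluate an arithmetic expression and return the numeric result."""

def render_tool_prompt(tools):
    """Render a tool-description block for the system prompt."""
    lines = ["You can use these tools:"]
    for t in tools:
        params = t.__code__.co_varnames[:t.__code__.co_argcount]
        lines.append(f"- {t.__name__}({', '.join(params)}): {t.__doc__}")
    return "\n".join(lines)

print(render_tool_prompt([search, calculate]))
```

Keeping descriptions in one place next to the implementation makes it harder for prompt and code to drift apart.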
Related Terms
Chain-of-Thought Prompting
Chain-of-thought prompting instructs an LLM to show its reasoning step by step before giving a final answer, significantly improving accuracy on complex reasoning, math, and multi-step problems.
Prompt Chaining
Prompt chaining connects multiple LLM calls sequentially where each step's output becomes the next step's input, enabling complex multi-stage tasks that exceed what any single prompt can accomplish reliably.
Tool Use
Tool use is the broader capability of LLMs to interact with external systems—executing code, browsing the web, querying databases, reading files—by calling tools during generation to retrieve information or take actions.
Prompt Engineering
Prompt engineering is the practice of designing and refining the text inputs given to AI language models to reliably produce accurate, useful, and well-formatted outputs for specific tasks.
Function Calling
Function calling enables LLMs to request the execution of predefined functions with structured arguments, allowing AI systems to interact with external APIs, databases, and tools rather than just generating text.
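To make "structured arguments" concrete: most function-calling APIs describe each function with a JSON Schema-style definition, and the model responds with a JSON object naming the function and its arguments. The `get_weather` schema below follows that general shape but is an assumption for illustration, not any particular provider's exact format.

```python
import json

# Illustrative JSON Schema-style function definition (hypothetical function).
get_weather_schema = {
    "name": "get_weather",
    "description": "Get the current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {
            "city": {"type": "string", "description": "City name"},
            "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
        },
        "required": ["city"],
    },
}

# The model emits structured arguments, which the application parses and
# dispatches to real code:
model_output = '{"name": "get_weather", "arguments": {"city": "Paris", "unit": "celsius"}}'
call = json.loads(model_output)
```

Because the arguments arrive as validated JSON rather than free text, the application can dispatch them to real code without fragile string parsing.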