Function Calling
Definition
Function calling (also called tool use) is a capability where an LLM, instead of generating a natural language response, outputs a structured JSON object specifying which function to call and what arguments to pass. The calling application then executes the function, passes the result back to the LLM, and the model uses the result to generate a final response. This enables LLMs to take actions: search the web, query databases, call APIs, execute code, read files, or control external systems. The LLM acts as a reasoning layer that decides what information is needed and which tools to use, while the application handles the actual execution. Major providers (OpenAI, Anthropic, Google) support function calling natively in their APIs.
Why It Matters
Function calling transforms LLMs from passive text generators into active agents that can take real-world actions. Without it, an LLM can tell you about the weather but not actually fetch current weather data; it can explain how to query a database but not execute the query. Function calling closes this gap. For 99helpers customers, function calling enables chatbot actions like: looking up a customer's account status in real-time, creating support tickets directly from the conversation, checking product inventory, or scheduling callbacks—all within the natural language interface. This moves chatbots from information-only to action-capable, dramatically expanding their utility.
How It Works
Function calling API flow: (1) define available tools with a JSON Schema: {name: 'get_account_status', description: 'Look up a customer account', parameters: {type: 'object', properties: {customer_id: {type: 'string'}}, required: ['customer_id']}}; (2) include the tools in the API call; (3) if the model decides to use a tool, it stops with a tool-call finish reason (finish_reason: 'tool_calls' in OpenAI's API; stop_reason: 'tool_use' in Anthropic's) and emits a tool-call object containing the function name and the arguments as a JSON string; (4) the application executes the function and gets the result; (5) the result is sent back to the model in a new message (role: 'tool' in OpenAI's API); (6) the model generates a final response using the tool result. Multiple tools can be defined; the model chooses which (if any) to call.
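The flow above can be sketched in Python. The model call itself is omitted here; the snippet shows the application-side pieces: the tool definition (step 1), and the dispatch that executes a tool call and builds the role: 'tool' message (steps 4 and 5). The get_account_status function and its return data are hypothetical stand-ins, and the message shapes follow the OpenAI-style format.

```python
import json

# Step 1: tool definition with a JSON-Schema parameters block (OpenAI-style shape).
TOOLS = [{
    "type": "function",
    "function": {
        "name": "get_account_status",
        "description": "Look up a customer account",
        "parameters": {
            "type": "object",
            "properties": {"customer_id": {"type": "string"}},
            "required": ["customer_id"],
        },
    },
}]

# Stand-in for the real lookup the application owns (hypothetical data).
def get_account_status(customer_id: str) -> dict:
    return {"customer_id": customer_id, "status": "active"}

FUNCTIONS = {"get_account_status": get_account_status}

def handle_tool_call(tool_call: dict) -> dict:
    """Steps 4-5: execute the requested function and build the role:'tool' message."""
    name = tool_call["function"]["name"]
    # Arguments arrive as a JSON string, not a parsed object.
    args = json.loads(tool_call["function"]["arguments"])
    result = FUNCTIONS[name](**args)
    return {
        "role": "tool",
        "tool_call_id": tool_call["id"],
        "content": json.dumps(result),
    }

# Step 3 simulated: the shape a model's tool call takes in the response.
tool_call = {
    "id": "call_1",
    "function": {
        "name": "get_account_status",
        "arguments": json.dumps({"customer_id": "C-42"}),
    },
}
tool_message = handle_tool_call(tool_call)
```

The returned tool_message is appended to the conversation and sent back to the model for step 6.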
Function Calling Flow
The LLM does not execute functions itself — it outputs a structured JSON call which the application runtime executes, then feeds the result back into context.
Real-World Example
A 99helpers chatbot handles the query 'What's the current status of my support ticket #12345?' Without function calling, the LLM can only say 'I don't have access to ticket information.' With function calling: (1) the model identifies the need for a tool call: get_ticket_status({ticket_id: '12345'}); (2) the application queries the ticketing database; (3) the result {status: 'In Progress', assigned_to: 'Sarah', last_updated: '2 hours ago'} is returned to the model; (4) the model responds: 'Ticket #12345 is currently In Progress and was last updated 2 hours ago—it's assigned to Sarah.' A direct integration delivered via natural language.
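In application code, the exchange above amounts to building up a message transcript that the model sees on the second call. A minimal sketch in OpenAI-style message format, with the database lookup replaced by the literal result from the example:

```python
import json

messages = [
    {"role": "user",
     "content": "What's the current status of my support ticket #12345?"},
    # (1) The model asks for a tool call instead of answering directly.
    {"role": "assistant", "content": None, "tool_calls": [{
        "id": "call_1",
        "type": "function",
        "function": {"name": "get_ticket_status",
                     "arguments": json.dumps({"ticket_id": "12345"})},
    }]},
]

# (2)-(3) The application queries the ticketing database and appends the result.
db_result = {"status": "In Progress", "assigned_to": "Sarah",
             "last_updated": "2 hours ago"}
messages.append({"role": "tool", "tool_call_id": "call_1",
                 "content": json.dumps(db_result)})

# (4) The full messages list is sent back to the model, which now has the
# ticket data in context and phrases the final reply in natural language.
```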
Common Mistakes
- ✕ Not validating function arguments before execution—an LLM may generate syntactically correct but semantically wrong arguments (e.g., negative quantities, future dates for past events).
- ✕ Giving the LLM access to destructive functions (delete, update) without confirmation steps—require human approval or reversibility for consequential actions.
- ✕ Defining too many tools in a single prompt—models have limited capacity to reason about large tool sets; group related tools and only include relevant ones per request type.
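The first two pitfalls can be guarded against in the dispatch layer, before any tool actually runs. A minimal sketch, where the tool names (create_order, delete_ticket), the validator, and the confirmation flag are all hypothetical:

```python
import json

# Hypothetical per-tool validator: reject semantically wrong arguments
# (e.g., negative quantities) before anything executes.
def validate_order_args(args: dict) -> list[str]:
    errors = []
    if args.get("quantity", 1) <= 0:
        errors.append("quantity must be positive")
    if not str(args.get("customer_id", "")).strip():
        errors.append("customer_id is required")
    return errors

# Tools whose effects are hard to reverse require explicit human approval.
DESTRUCTIVE_TOOLS = {"delete_ticket", "update_account"}

def guard_tool_call(name: str, arguments: str, confirmed: bool = False) -> dict:
    """Gate a model-requested tool call; only {'ok': True} results proceed."""
    args = json.loads(arguments)
    if name == "create_order":
        errors = validate_order_args(args)
        if errors:
            return {"ok": False, "errors": errors}
    if name in DESTRUCTIVE_TOOLS and not confirmed:
        return {"ok": False, "errors": ["requires human confirmation"]}
    return {"ok": True, "args": args}

# Semantically wrong arguments are caught before execution.
bad = guard_tool_call("create_order", '{"customer_id": "C-42", "quantity": -3}')
# Destructive calls are blocked until a human confirms.
blocked = guard_tool_call("delete_ticket", '{"ticket_id": "12345"}')
```

Rejection results can be returned to the model as the tool output, giving it a chance to correct the arguments or ask the user for confirmation.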
Related Terms
Tool Use
Tool use is the broader capability of LLMs to interact with external systems—executing code, browsing the web, querying databases, reading files—by calling tools during generation to retrieve information or take actions.
LLM Agent
An LLM agent is an AI system that uses a language model as its reasoning core, autonomously planning and executing multi-step tasks by calling tools, observing results, and iterating until the goal is achieved.
Structured Output
Structured output constrains LLM responses to follow a specific format—typically JSON with defined fields—enabling reliable parsing and integration with downstream systems rather than free-form text generation.
LLM API
An LLM API is a cloud service interface that provides programmatic access to large language models, allowing developers to send prompts and receive completions without managing model infrastructure.
Ready to build your AI chatbot?
Put these concepts into practice with 99helpers — no code required.
Start free trial →