As AI systems continue to evolve, their ability to hold human-like conversations has reached impressive levels. From chatbots to personal assistants, natural language generation is at the heart of many modern applications. But what happens when we need these systems to do more than just talk? What if we want them to interact with structured systems and make an external impact, like calling an API, running a database query, or performing a calculation?
That’s where tool calling comes in.
Tool calling (also referred to as function calling) is a powerful technique that allows language models to take action by invoking external tools in a structured and predictable way. Instead of generating a string of text, a model can return a response that includes specific input parameters for a defined tool, and that tool can then do real work based on those inputs.
Before we dive in, here’s something you’ll love:
We are currently working on Langcasts.com, a resource crafted specifically for AI engineers, whether you're just getting started or already deep in the game. We'll be sharing guides, tips, hands-on walkthroughs, and extensive classes to help you master every piece of the puzzle. If you’d like to be notified the moment new materials drop, you can subscribe here to get updates directly.
This guide will walk you through the basics of tool calling using LangChain, a framework designed to help you build advanced language model applications. Whether you're just starting out or already building with LLMs, this article will give you a clear, hands-on understanding of how tool calling works, why it matters, and how to use it effectively in your projects.
What is Tool Calling?
Tool calling is about giving language models the ability to do things, not just say things.
Most people interact with language models through plain text: you type in a question, and the model responds in natural language. This works well for conversations, summaries, or creative writing. But when you want your model to interact with structured systems, like calling a weather API, running a search, or performing a calculation, you need a different kind of output.
Instead of just generating text, the model can return a structured request that matches a predefined tool, essentially a function that it knows how to call.
Think of it like this:
When you ask, “What’s 2 multiplied by 3?”, a regular model might just reply: “6.”
With tool calling, the model instead says: “I need to call the multiply tool with a=2 and b=3,” and then the system executes the actual function to return the result.
This approach makes the model much more useful in real-world applications—especially when the goal isn’t just conversation, but action.
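To make the idea concrete, here is a minimal sketch in plain JavaScript (no LangChain involved) of what happens after a model emits a structured tool call. The `toolCall` object is hand-written to mimic the shape a model might return; the names and fields are illustrative, not a real model's output.

```javascript
// Registry of tools the application knows how to execute.
const tools = {
  multiply: ({ a, b }) => a * b,
};

// A structured tool call, shaped like what a model might emit
// for "What's 2 multiplied by 3?" (hand-written here for illustration).
const toolCall = { name: "multiply", args: { a: 2, b: 3 } };

// The application, not the model, actually runs the function.
const result = tools[toolCall.name](toolCall.args);
console.log(result); // 6
```

The key point is the division of labor: the model only chooses the tool and fills in the arguments; your code performs the execution.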
In LangChain, tool calling enables your model to:
- Work with custom functions you define
- Respect schemas that ensure inputs are valid
- Seamlessly call tools only when relevant
The model doesn't always use a tool; it decides when one is appropriate. If a user says “Hello,” it might just respond in kind. But if the user asks for something that matches a tool’s purpose, the model will switch into action mode.
Tool calling turns a smart model into a smart agent, one that understands when to speak, and when to act.
Prerequisites
Before you dive into tool calling with LangChain, it helps to understand a few basic concepts. Having them in your toolkit will make everything else easier to follow.
Tools
In LangChain, a tool is essentially a function that the model can call. Each tool has a clear purpose, a name, a description, and a schema that defines what kind of inputs it expects. You can think of a tool as a bridge between the model and some action you want to perform, like fetching data or performing a calculation.
Schemas
Schemas help define the shape of the data the tool expects. They describe the exact input format using a library like zod. This ensures the model knows what kind of data to provide, so the tool can work properly.
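If you haven't used zod before, the check it performs is easy to picture. The hand-rolled validator below is only a sketch of what a schema like `z.object({ city: z.string() })` enforces; in a real project you would let zod do this for you.

```javascript
// A minimal stand-in for what z.object({ city: z.string() }) checks:
// the input must be an object with a string `city` field.
function validateWeatherInput(input) {
  if (typeof input !== "object" || input === null) {
    throw new Error("Expected an object");
  }
  if (typeof input.city !== "string") {
    throw new Error("Expected `city` to be a string");
  }
  return input; // valid input passes through unchanged
}

console.log(validateWeatherInput({ city: "Nairobi" })); // valid input passes through
```

Because the schema is attached to the tool, the model sees exactly which fields it must supply, and malformed arguments are rejected before your tool logic ever runs.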
Chat Models
Tool calling only works with models that support it. These are usually chat-based models like OpenAI’s GPT-4 or Anthropic’s Claude. LangChain lets you easily connect these models and add tool-calling capabilities.
Basic Coding Knowledge
To follow along with the examples, you should be comfortable reading and writing basic JavaScript or TypeScript. The snippets in this guide use JavaScript, though LangChain also offers a Python version.
Once you're familiar with these elements, you're ready to start building. In the next section, we’ll look at how tool calling actually works under the hood.
Anatomy of Tool Calling
Tool calling in LangChain follows a simple but powerful pattern. It happens in four main steps: creating the tool, binding it to a model, letting the model decide when to use it, and finally executing the tool.
Let’s walk through each step.
1. Tool Creation
This is where you define the tool you want the model to use. A tool is just a function, but it also needs a name, description, and a schema for its input. LangChain provides a tool() helper to make this process easy.
```javascript
import { tool } from "@langchain/core/tools";
import { z } from "zod";

const getWeather = tool(
  async ({ city }) => {
    // Placeholder logic – in a real case, you'd fetch from a weather API
    return `The current weather in ${city} is sunny and 28°C.`;
  },
  {
    name: "get_weather",
    description: "Get the current weather for a specified city",
    schema: z.object({
      city: z.string(),
    }),
  }
);
```
Now the model knows this tool exists, what it’s called, what it does, and what kind of inputs it expects.
2. Tool Binding
After defining your tool, you need to connect it to a model that supports tool calling. This is done using .bindTools()
.
```javascript
const modelWithTools = model.bindTools([getWeather]);
```
This step gives the model access to the tool. It doesn’t mean the model will always use it. It just means the option is now available.
3. Tool Calling
When a user prompt matches the purpose of a tool, the model can choose to call it. For example:
```javascript
const response = await modelWithTools.invoke("What's the weather like in Nairobi?");
```
If the model decides to use the tool, it won’t just return an answer. It will return the name of the tool it wants to use and the input values it came up with:
```json
{
  "tool_calls": [
    {
      "name": "get_weather",
      "args": {
        "city": "Nairobi"
      }
    }
  ]
}
```
This output gives your application all the information it needs to act on the tool call.
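Acting on that output is a matter of looking up each named tool and invoking it with the supplied arguments. The dispatch loop below is a hand-rolled sketch (the `tool_calls` field name follows the example above, but plain async functions stand in for real LangChain tool objects):

```javascript
// Map tool names to implementations. In a LangChain app these would be
// your tool objects; plain async functions stand in for them here.
const toolRegistry = {
  get_weather: async ({ city }) =>
    `The current weather in ${city} is sunny and 28°C.`,
};

// The structured output from step 3 (hand-written to match the example).
const modelOutput = {
  tool_calls: [{ name: "get_weather", args: { city: "Nairobi" } }],
};

async function executeToolCalls(output) {
  const results = [];
  for (const call of output.tool_calls) {
    const tool = toolRegistry[call.name];
    if (!tool) throw new Error(`Unknown tool: ${call.name}`);
    results.push(await tool(call.args));
  }
  return results;
}

executeToolCalls(modelOutput).then((results) => console.log(results[0]));
```

In a full agent loop you would typically feed each result back to the model as a tool message so it can compose a final answer for the user.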
4. Tool Execution
You can now take the provided arguments and pass them directly to the tool to get the final result:
```javascript
const result = await getWeather.invoke({ city: "Nairobi" });
```
The output might be something like:
"The current weather in Nairobi is sunny and 28°C."
This four-step process is the backbone of tool calling in LangChain. It’s how your models go beyond conversation and start taking purposeful actions. Next, we’ll walk through a full example so you can see this process in motion from start to finish.
Best Practices for Tool Calling
Tool calling can be incredibly powerful, but like any feature, it works best when used thoughtfully. Here are practical guidelines to help you design tools that your models can understand, select, and use efficiently.
Use Clear and Specific Tool Names
The tool’s name is one of the first things the model sees when deciding which tool to call. Avoid vague names like `runTool` or `handlerFunction`. Instead, opt for names that reflect exactly what the tool does: `get_weather`, `calculate_tax`, `send_email`.
Write Descriptions as if Explaining to a New Teammate
The model relies on tool descriptions to understand purpose. Use plain, descriptive language:

```javascript
description: "Get the current weather for a specified city."
```
Avoid overly technical jargon or cryptic summaries. Treat the model as a smart intern that needs guidance.
Keep Tools Narrow in Scope
Tools that do one thing well are easier for models to reason about than tools that try to handle many different tasks.
- Good example: A tool that only gets the weather.
- Problematic: A tool that gets weather, sets calendar reminders, and sends notifications.
If you need more than one action, split it into multiple tools.
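As a sketch of that split, here is one overloaded handler refactored into two narrow tools. The shapes are plain objects for illustration (not LangChain's tool() helper), and the names are made up:

```javascript
// Instead of one "assistant_actions" tool that branches on an
// "action" argument, define one narrowly scoped tool per action.
const getWeatherTool = {
  name: "get_weather",
  description: "Get the current weather for a specified city",
  run: ({ city }) => `Weather in ${city}: sunny, 28°C`,
};

const setReminderTool = {
  name: "set_reminder",
  description: "Create a calendar reminder at a given time",
  run: ({ time, note }) => `Reminder set for ${time}: ${note}`,
};
```

Each tool now has one job, one schema, and one description, so the model's choice between them is unambiguous.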
Use Well-Defined Schemas
Schemas act like contracts between the model and the tool. Be strict and explicit with your input definitions using Zod or another schema validator:
```javascript
schema: z.object({
  city: z.string(),
}),
```
This reduces ambiguity and helps the model provide clean input.
Avoid Overloading the Model With Too Many Tools
While LangChain supports multiple tools, keep the list manageable. Giving a model 30 tools to choose from increases confusion and the chance of errors.
Start with 3–5 well-scoped tools. Only add more when you're confident the model is handling current tools well.
Favour Models With Native Tool Calling Support
Not all language models are equally good at tool calling. Prefer models that natively support the tool-calling paradigm (e.g., OpenAI GPT-4-turbo with function calling or Anthropic’s Claude with tool use capabilities).
LangChain will still work with basic models, but performance may vary.
Log and Review Tool Call Attempts
Monitoring how your model uses tools in production can highlight edge cases, misfires, or missed opportunities. Use logs to answer:
- Is the model calling tools when it should?
- Are the tool call inputs valid?
- Are users phrasing prompts in unexpected ways?
Following these best practices will help you build a more reliable, intelligent assistant that uses tools like a pro.
Tool calling is a bridge between intelligent models and the real-world systems they can control. By giving AI the ability to interact with tools, APIs, and functions, we unlock new levels of usefulness, precision, and automation.
Whether you’re just starting out or already building production-ready applications, mastering tool calling means you’re building AI that doesn’t just talk, it acts.
The key lies in simplicity: clear tools, focused schemas, and models designed to work in harmony with your logic. With the right setup, your AI agent can go from being a passive responder to an active problem-solver.
And that’s when things really get exciting.