How to use a chat model to call tools
This guide assumes familiarity with the following concepts: chat models and LangChain tools.
We use the term tool calling interchangeably with function calling. Although function calling is sometimes meant to refer to invocations of a single function, we treat all models as though they can return multiple tool or function calls in each message.
Tool calling allows a chat model to respond to a given prompt by “calling a tool”. While the name implies that the model is performing some action, this is actually not the case! The model generates the arguments to a tool, and actually running the tool (or not) is up to the user. For example, if you want to extract output matching some schema from unstructured text, you could give the model an “extraction” tool that takes parameters matching the desired schema, then treat the generated output as your final result.
If you only need formatted values, try the .withStructuredOutput() chat model method as a simpler entrypoint.
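For example, here is a minimal sketch of that approach (assuming a ChatOpenAI model; the joke schema is purely illustrative):
import { ChatOpenAI } from "@langchain/openai";
import { z } from "zod";

const structuredLlm = new ChatOpenAI({ model: "gpt-4o" }).withStructuredOutput(
  z.object({
    setup: z.string().describe("The setup of the joke."),
    punchline: z.string().describe("The punchline of the joke."),
  })
);
// Returns a parsed object matching the schema instead of a raw message.
const joke = await structuredLlm.invoke("Tell me a joke about cats");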
However, tool calling goes beyond structured output, since you can pass responses to called tools back to the model to create longer interactions. For instance, given a search engine tool, an LLM might handle a query by first issuing a call to the search engine with arguments. The system calling the LLM can receive the tool call, execute it, and return the output to the LLM to inform its response. LangChain includes a suite of built-in tools and supports several methods for defining your own custom tools.
Tool calling is not universal, but many popular LLM providers, including Anthropic, Cohere, Google, Mistral, OpenAI, and others, support variants of a tool calling feature.
LangChain implements standard interfaces for defining tools, passing them to LLMs, and representing tool calls. This guide will show you how to use them.
Passing tools to LLMs
Chat models that support tool calling features implement a .bindTools() method, which receives a list of LangChain tool objects and binds them to the chat model in its expected format. Subsequent invocations of the chat model will include tool schemas in its calls to the LLM.
As of @langchain/core version 0.2.9, all chat models with tool calling capabilities now support OpenAI-formatted tools.
Let’s walk through a few examples:
Pick your chat model:
- Anthropic
- OpenAI
- MistralAI
- FireworksAI
Anthropic
Install dependencies
- npm
- yarn
- pnpm
npm i @langchain/anthropic @langchain/core
yarn add @langchain/anthropic @langchain/core
pnpm add @langchain/anthropic @langchain/core
Add environment variables
ANTHROPIC_API_KEY=your-api-key
Instantiate the model
import { ChatAnthropic } from "@langchain/anthropic";
const llm = new ChatAnthropic({
model: "claude-3-sonnet-20240229",
temperature: 0
});
OpenAI
Install dependencies
- npm
- yarn
- pnpm
npm i @langchain/openai @langchain/core
yarn add @langchain/openai @langchain/core
pnpm add @langchain/openai @langchain/core
Add environment variables
OPENAI_API_KEY=your-api-key
Instantiate the model
import { ChatOpenAI } from "@langchain/openai";
const llm = new ChatOpenAI({
model: "gpt-3.5-turbo",
temperature: 0
});
MistralAI
Install dependencies
- npm
- yarn
- pnpm
npm i @langchain/mistralai @langchain/core
yarn add @langchain/mistralai @langchain/core
pnpm add @langchain/mistralai @langchain/core
Add environment variables
MISTRAL_API_KEY=your-api-key
Instantiate the model
import { ChatMistralAI } from "@langchain/mistralai";
const llm = new ChatMistralAI({
model: "mistral-large-latest",
temperature: 0
});
FireworksAI
Install dependencies
- npm
- yarn
- pnpm
npm i @langchain/community @langchain/core
yarn add @langchain/community @langchain/core
pnpm add @langchain/community @langchain/core
Add environment variables
FIREWORKS_API_KEY=your-api-key
Instantiate the model
import { ChatFireworks } from "@langchain/community/chat_models/fireworks";
const llm = new ChatFireworks({
model: "accounts/fireworks/models/firefunction-v1",
temperature: 0
});
We can use the .bindTools() method to handle the conversion from a LangChain tool to our model provider’s specific format and bind it to the model (i.e., passing it in each time the model is invoked). A number of models implement helper methods that will take care of formatting and binding different function-like objects to the model. Let’s create a new tool implementing a Zod schema, then bind it to the model:
The tool function is available in @langchain/core version 0.2.7 and above. If you are on an older version of core, you should instantiate and use DynamicStructuredTool instead.
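For reference, here is a minimal sketch of the equivalent DynamicStructuredTool setup (it reuses the calculatorSchema defined in the example below):
import { DynamicStructuredTool } from "@langchain/core/tools";

// Equivalent to the tool() example below, for older versions of core.
const legacyCalculatorTool = new DynamicStructuredTool({
  name: "calculator",
  description: "Can perform mathematical operations.",
  schema: calculatorSchema,
  func: async ({ operation, number1, number2 }) => {
    // Same body as below; functions must return strings.
    if (operation === "add") return `${number1 + number2}`;
    if (operation === "subtract") return `${number1 - number2}`;
    if (operation === "multiply") return `${number1 * number2}`;
    if (operation === "divide") return `${number1 / number2}`;
    throw new Error("Invalid operation.");
  },
});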
import { tool } from "@langchain/core/tools";
import { z } from "zod";
import { ChatOpenAI } from "@langchain/openai";
const llm = new ChatOpenAI({ model: "gpt-4o", temperature: 0 });
/**
* Note that the descriptions here are crucial, as they will be passed along
* to the model along with the tool name.
*/
const calculatorSchema = z.object({
operation: z
.enum(["add", "subtract", "multiply", "divide"])
.describe("The type of operation to execute."),
number1: z.number().describe("The first number to operate on."),
number2: z.number().describe("The second number to operate on."),
});
const calculatorTool = tool(
async ({ operation, number1, number2 }) => {
// Functions must return strings
if (operation === "add") {
return `${number1 + number2}`;
} else if (operation === "subtract") {
return `${number1 - number2}`;
} else if (operation === "multiply") {
return `${number1 * number2}`;
} else if (operation === "divide") {
return `${number1 / number2}`;
} else {
throw new Error("Invalid operation.");
}
},
{
name: "calculator",
description: "Can perform mathematical operations.",
schema: calculatorSchema,
}
);
const llmWithTools = llm.bindTools([calculatorTool]);
Now, let’s invoke it! We expect the model to use the calculator to answer the question:
const res = await llmWithTools.invoke("What is 3 * 12");
console.log(res.tool_calls);
[
{
name: "calculator",
args: { operation: "multiply", number1: 3, number2: 12 },
id: "call_QraczsExVCpWmD8mY34BnyFL"
}
]
See a LangSmith trace for the above here.
We can see that the response message contains a tool_calls field when the model decides to call the tool. This will be in LangChain’s standardized format.
The .tool_calls attribute should contain valid tool calls. Note that on occasion, model providers may output malformed tool calls (e.g., arguments that are not valid JSON). When parsing fails in these cases, the message will contain instances of InvalidToolCall objects in the .invalid_tool_calls attribute. An InvalidToolCall can have a name, string arguments, identifier, and error message.
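To complete the loop described at the start of this guide, you can execute the generated tool call yourself and pass the result back to the model as a ToolMessage. Here is a minimal sketch continuing from the calculator example above (the exact conversational wiring is up to your application):
import { HumanMessage, ToolMessage } from "@langchain/core/messages";

// Run the tool with the model-generated arguments; our calculator returns a string.
const toolCall = res.tool_calls[0];
const toolOutput = await calculatorTool.invoke(toolCall.args);

const finalResponse = await llmWithTools.invoke([
  new HumanMessage("What is 3 * 12"),
  res, // the AIMessage containing the tool call
  new ToolMessage({ tool_call_id: toolCall.id ?? "", content: toolOutput }),
]);
// The model can now answer using the tool's output, e.g. "3 * 12 is 36."
console.log(finalResponse.content);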
Streaming
When tools are called in a streaming context, message chunks will be populated with tool call chunk objects in a list via the .tool_call_chunks attribute. A ToolCallChunk includes optional string fields for the tool name, args, and id, and includes an optional integer field index that can be used to join chunks together. Fields are optional because portions of a tool call may be streamed across different chunks (e.g., a chunk that includes a substring of the arguments may have null values for the tool name and id).
Because message chunks inherit from their parent message class, an AIMessageChunk with tool call chunks will also include .tool_calls and .invalid_tool_calls fields. These fields are parsed best-effort from the message’s tool call chunks.
Note that not all providers currently support streaming for tool calls. If this is the case for your specific provider, the model will yield a single chunk with the entire call when you call .stream().
const stream = await llmWithTools.stream("What is 308 / 29");
for await (const chunk of stream) {
console.log(chunk.tool_call_chunks);
}
[
{
name: "calculator",
args: "",
id: "call_MzevUrdu5msUvISEP5TWGQYI",
index: 0
}
]
[ { name: undefined, args: '{"', id: undefined, index: 0 } ]
[ { name: undefined, args: "operation", id: undefined, index: 0 } ]
[ { name: undefined, args: '":"', id: undefined, index: 0 } ]
[ { name: undefined, args: "divide", id: undefined, index: 0 } ]
[ { name: undefined, args: '","', id: undefined, index: 0 } ]
[ { name: undefined, args: "number", id: undefined, index: 0 } ]
[ { name: undefined, args: "1", id: undefined, index: 0 } ]
[ { name: undefined, args: '":', id: undefined, index: 0 } ]
[ { name: undefined, args: "308", id: undefined, index: 0 } ]
[ { name: undefined, args: ',"', id: undefined, index: 0 } ]
[ { name: undefined, args: "number", id: undefined, index: 0 } ]
[ { name: undefined, args: "2", id: undefined, index: 0 } ]
[ { name: undefined, args: '":', id: undefined, index: 0 } ]
[ { name: undefined, args: "29", id: undefined, index: 0 } ]
[ { name: undefined, args: "}", id: undefined, index: 0 } ]
[]
Note that using the concat method on message chunks will merge their corresponding tool call chunks. This is the principle by which LangChain’s various tool output parsers support streaming.
For example, below we accumulate tool call chunks:
import { concat } from "@langchain/core/utils/stream";
const streamWithAccumulation = await llmWithTools.stream(
"What is 32993 - 2339"
);
let final;
for await (const chunk of streamWithAccumulation) {
if (!final) {
final = chunk;
} else {
final = concat(final, chunk);
}
}
console.log(final.tool_calls);
[
{
name: "calculator",
args: { operation: "subtract", number1: 32993, number2: 2339 },
id: "call_dDcRfLQ7L27c50eeSCaHEaIo"
}
]
Few-shotting with tools
You can give the model examples of how you would like tools to be called in order to guide generation by inputting manufactured tool call turns. For example, given the above calculator tool, we could define a new operator, 🦜. Let’s see what happens when we use it naively:
const res = await llmWithTools.invoke("What is 3 🦜 12");
console.log(res.content);
console.log(res.tool_calls);
[
{
name: "calculator",
args: { operation: "multiply", number1: 3, number2: 12 },
id: "call_pVPqABsVEJCpLQRSOv8h3N0I"
}
]
It doesn’t quite know how to interpret 🦜 as an operation. Now, let’s try giving it an example in the form of manufactured messages to steer it towards divide:
import { HumanMessage, AIMessage, ToolMessage } from "@langchain/core/messages";
const res = await llmWithTools.invoke([
new HumanMessage("What is 333382 🦜 1932?"),
new AIMessage({
content:
"The 🦜 operator is shorthand for division, so we call the divide tool.",
tool_calls: [
{
id: "12345",
name: "calulator",
args: {
number1: 333382,
number2: 1932,
operation: "divide",
},
},
],
}),
new ToolMessage({
tool_call_id: "12345",
content: "The answer is 172.558.",
}),
new AIMessage("The answer is 172.558."),
new HumanMessage("What is 3 🦜 12"),
]);
console.log(res.tool_calls);
[
{
name: "calculator",
args: { operation: "divide", number1: 3, number2: 12 },
id: "call_fSqOSwyJYTKpH1Y7x63JBLik"
}
]
Binding model-specific formats (advanced)
Providers adopt different conventions for formatting tool schemas. For instance, OpenAI uses a format like this:
- type: The type of the tool. At the time of writing, this is always “function”.
- function: An object containing tool parameters.
- function.name: The name of the schema to output.
- function.description: A high-level description of the schema to output.
- function.parameters: The nested details of the schema you want to extract, formatted as a JSON schema object.
We can bind this model-specific format directly to the model if needed. Here’s an example:
import { ChatOpenAI } from "@langchain/openai";
const model = new ChatOpenAI({ model: "gpt-4o" });
const modelWithTools = model.bind({
tools: [
{
type: "function",
function: {
name: "calculator",
description: "Can perform mathematical operations.",
parameters: {
type: "object",
properties: {
operation: {
type: "string",
description: "The type of operation to execute.",
enum: ["add", "subtract", "multiply", "divide"],
},
number1: { type: "number", description: "First integer" },
number2: { type: "number", description: "Second integer" },
},
required: ["number1", "number2"],
},
},
},
],
});
await modelWithTools.invoke(`What's 119 times 8?`);
AIMessage {
lc_serializable: true,
lc_kwargs: {
content: "",
tool_calls: [
{
name: "calculator",
args: { operation: "multiply", number1: 119, number2: 8 },
id: "call_OP8F1LP7B3hwPEc2TzGBOYKP"
}
],
invalid_tool_calls: [],
additional_kwargs: {
function_call: undefined,
tool_calls: [
{
id: "call_OP8F1LP7B3hwPEc2TzGBOYKP",
type: "function",
function: [Object]
}
]
},
response_metadata: {}
},
lc_namespace: [ "langchain_core", "messages" ],
content: "",
name: undefined,
additional_kwargs: {
function_call: undefined,
tool_calls: [
{
id: "call_OP8F1LP7B3hwPEc2TzGBOYKP",
type: "function",
function: {
name: "calculator",
arguments: '{"operation":"multiply","number1":119,"number2":8}'
}
}
]
},
response_metadata: {
tokenUsage: { completionTokens: 24, promptTokens: 85, totalTokens: 109 },
finish_reason: "tool_calls"
},
tool_calls: [
{
name: "calculator",
args: { operation: "multiply", number1: 119, number2: 8 },
id: "call_OP8F1LP7B3hwPEc2TzGBOYKP"
}
],
invalid_tool_calls: [],
usage_metadata: { input_tokens: 85, output_tokens: 24, total_tokens: 109 }
}
This is functionally equivalent to the .bindTools() calls above.
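Model-specific parameters can be bound in the same way. For example, the sketch below forces the model to call our tool using OpenAI’s tool_choice parameter (this parameter is OpenAI-specific; check your provider’s documentation for its equivalent):
const modelWithForcedTool = model.bind({
  tools: [
    {
      type: "function",
      function: {
        name: "calculator",
        description: "Can perform mathematical operations.",
        parameters: {
          type: "object",
          properties: {
            operation: {
              type: "string",
              enum: ["add", "subtract", "multiply", "divide"],
            },
            number1: { type: "number", description: "First integer" },
            number2: { type: "number", description: "Second integer" },
          },
          required: ["operation", "number1", "number2"],
        },
      },
    },
  ],
  // Require a call to the named function rather than letting the model decide.
  tool_choice: { type: "function", function: { name: "calculator" } },
});

await modelWithForcedTool.invoke(`What's 119 times 8?`);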
Next steps
Now you’ve learned how to bind tool schemas to a chat model and have the model generate calls to those tools. Next, check out some more specific uses of tool calling: