How to pass tool outputs to chat models
This guide assumes familiarity with the following concepts:
- Chat models
- LangChain Tools
- Tool calling
If we're using the model-generated tool invocations to actually call tools and want to pass the tool results back to the model, we can do so using ToolMessages and ToolCalls. First, let's define some tools and a chat model instance.
import { z } from "zod";
import { tool } from "@langchain/core/tools";
const addTool = tool(
  async ({ a, b }) => {
    return a + b;
  },
  {
    name: "add",
    schema: z.object({
      a: z.number(),
      b: z.number(),
    }),
    description: "Adds a and b.",
  }
);
const multiplyTool = tool(
  async ({ a, b }) => {
    return a * b;
  },
  {
    name: "multiply",
    schema: z.object({
      a: z.number(),
      b: z.number(),
    }),
    description: "Multiplies a and b.",
  }
);
const tools = [addTool, multiplyTool];
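As a quick sanity check, tools defined this way can also be invoked directly with plain arguments. A minimal sketch (the expected values follow from the tool definitions above):
// Tools are runnables, so they can be invoked directly with their input.
console.log(await addTool.invoke({ a: 11, b: 49 })); // 60
console.log(await multiplyTool.invoke({ a: 3, b: 12 })); // 36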
Pick your chat model:
- OpenAI
- Anthropic
- FireworksAI
- MistralAI
- Groq
- VertexAI
OpenAI

Install dependencies
npm i @langchain/openai 
yarn add @langchain/openai 
pnpm add @langchain/openai 
Add environment variables
OPENAI_API_KEY=your-api-key
Instantiate the model
import { ChatOpenAI } from "@langchain/openai";
const llm = new ChatOpenAI({
  model: "gpt-3.5-turbo",
  temperature: 0
});
Anthropic

Install dependencies
npm i @langchain/anthropic 
yarn add @langchain/anthropic 
pnpm add @langchain/anthropic 
Add environment variables
ANTHROPIC_API_KEY=your-api-key
Instantiate the model
import { ChatAnthropic } from "@langchain/anthropic";
const llm = new ChatAnthropic({
  model: "claude-3-5-sonnet-20240620",
  temperature: 0
});
FireworksAI

Install dependencies
npm i @langchain/community 
yarn add @langchain/community 
pnpm add @langchain/community 
Add environment variables
FIREWORKS_API_KEY=your-api-key
Instantiate the model
import { ChatFireworks } from "@langchain/community/chat_models/fireworks";
const llm = new ChatFireworks({
  model: "accounts/fireworks/models/firefunction-v1",
  temperature: 0
});
MistralAI

Install dependencies
npm i @langchain/mistralai 
yarn add @langchain/mistralai 
pnpm add @langchain/mistralai 
Add environment variables
MISTRAL_API_KEY=your-api-key
Instantiate the model
import { ChatMistralAI } from "@langchain/mistralai";
const llm = new ChatMistralAI({
  model: "mistral-large-latest",
  temperature: 0
});
Groq

Install dependencies
npm i @langchain/groq 
yarn add @langchain/groq 
pnpm add @langchain/groq 
Add environment variables
GROQ_API_KEY=your-api-key
Instantiate the model
import { ChatGroq } from "@langchain/groq";
const llm = new ChatGroq({
  model: "mixtral-8x7b-32768",
  temperature: 0
});
VertexAI

Install dependencies
npm i @langchain/google-vertexai 
yarn add @langchain/google-vertexai 
pnpm add @langchain/google-vertexai 
Add environment variables
GOOGLE_APPLICATION_CREDENTIALS=credentials.json
Instantiate the model
import { ChatVertexAI } from "@langchain/google-vertexai";
const llm = new ChatVertexAI({
  model: "gemini-1.5-pro",
  temperature: 0
});
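Whichever provider you choose, bind the tools to the model so it can generate tool calls; this is what defines the llmWithTools instance used below:
const llmWithTools = llm.bindTools(tools);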
Compatibility note: this functionality requires @langchain/core >= 0.2.16. Please see the guide on upgrading if you are on an older version.
If we invoke a tool with a ToolCall, we'll automatically get back a ToolMessage that can be fed back to the model:
import { HumanMessage } from "@langchain/core/messages";
const messages = [new HumanMessage("What is 3 * 12? Also, what is 11 + 49?")];
const aiMessage = await llmWithTools.invoke(messages);
messages.push(aiMessage);
const toolsByName = {
  add: addTool,
  multiply: multiplyTool,
};
// Invoking a tool with the full ToolCall (rather than just its args)
// returns a ToolMessage with tool_call_id already populated.
for (const toolCall of aiMessage.tool_calls ?? []) {
  const selectedTool = toolsByName[toolCall.name];
  const toolMessage = await selectedTool.invoke(toolCall);
  messages.push(toolMessage);
}
console.log(messages);
[
  HumanMessage {
    lc_serializable: true,
    lc_kwargs: {
      content: 'What is 3 * 12? Also, what is 11 + 49?',
      additional_kwargs: {},
      response_metadata: {}
    },
    lc_namespace: [ 'langchain_core', 'messages' ],
    content: 'What is 3 * 12? Also, what is 11 + 49?',
    name: undefined,
    additional_kwargs: {},
    response_metadata: {},
    id: undefined
  },
  AIMessage {
    lc_serializable: true,
    lc_kwargs: {
      content: '',
      tool_calls: [Array],
      invalid_tool_calls: [],
      additional_kwargs: [Object],
      id: 'chatcmpl-9llAzVKdHCJkcUCnwGx62bqesSJPB',
      response_metadata: {}
    },
    lc_namespace: [ 'langchain_core', 'messages' ],
    content: '',
    name: undefined,
    additional_kwargs: { function_call: undefined, tool_calls: [Array] },
    response_metadata: { tokenUsage: [Object], finish_reason: 'tool_calls' },
    id: 'chatcmpl-9llAzVKdHCJkcUCnwGx62bqesSJPB',
    tool_calls: [ [Object], [Object] ],
    invalid_tool_calls: [],
    usage_metadata: { input_tokens: 87, output_tokens: 50, total_tokens: 137 }
  },
  ToolMessage {
    lc_serializable: true,
    lc_kwargs: {
      content: '36',
      artifact: undefined,
      tool_call_id: 'call_7P5ZjvqWc7jrXjWDkhZ6MU4b',
      name: 'multiply',
      additional_kwargs: {},
      response_metadata: {}
    },
    lc_namespace: [ 'langchain_core', 'messages' ],
    content: '36',
    name: 'multiply',
    additional_kwargs: {},
    response_metadata: {},
    id: undefined,
    tool_call_id: 'call_7P5ZjvqWc7jrXjWDkhZ6MU4b',
    artifact: undefined
  },
  ToolMessage {
    lc_serializable: true,
    lc_kwargs: {
      content: '60',
      artifact: undefined,
      tool_call_id: 'call_jbyowegkI0coHbnnHs7HLELC',
      name: 'add',
      additional_kwargs: {},
      response_metadata: {}
    },
    lc_namespace: [ 'langchain_core', 'messages' ],
    content: '60',
    name: 'add',
    additional_kwargs: {},
    response_metadata: {},
    id: undefined,
    tool_call_id: 'call_jbyowegkI0coHbnnHs7HLELC',
    artifact: undefined
  }
]
await llmWithTools.invoke(messages);
AIMessage {
  lc_serializable: true,
  lc_kwargs: {
    content: '3 * 12 is 36, and 11 + 49 is 60.',
    tool_calls: [],
    invalid_tool_calls: [],
    additional_kwargs: { function_call: undefined, tool_calls: undefined },
    id: 'chatcmpl-9llB0VVQNdufqhJHHtY9yCPeQeKLZ',
    response_metadata: {}
  },
  lc_namespace: [ 'langchain_core', 'messages' ],
  content: '3 * 12 is 36, and 11 + 49 is 60.',
  name: undefined,
  additional_kwargs: { function_call: undefined, tool_calls: undefined },
  response_metadata: {
    tokenUsage: { completionTokens: 19, promptTokens: 153, totalTokens: 172 },
    finish_reason: 'stop'
  },
  id: 'chatcmpl-9llB0VVQNdufqhJHHtY9yCPeQeKLZ',
  tool_calls: [],
  invalid_tool_calls: [],
  usage_metadata: { input_tokens: 153, output_tokens: 19, total_tokens: 172 }
}
Note that we pass back the same tool_call_id in the ToolMessage that we received from the model; this helps the model match tool responses to the corresponding tool calls.
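If you run a tool yourself instead of invoking it with the ToolCall, you can construct the ToolMessage manually. A minimal sketch, assuming the first tool call on aiMessage is the multiply call from above:
import { ToolMessage } from "@langchain/core/messages";

// A manual equivalent of what selectedTool.invoke(toolCall) does for us:
// copy the tool call's id into tool_call_id so the model can match this
// result to the call that produced it.
const firstToolCall = aiMessage.tool_calls?.[0];
const manualToolMessage = new ToolMessage({
  content: "36",
  name: firstToolCall?.name ?? "multiply",
  tool_call_id: firstToolCall?.id ?? "",
});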
Related

You've now seen how to pass tool outputs back to a model. From here, you may want to explore the other guides on tool calling.