How to init any model in one line
Many LLM applications let end users specify what model provider and model they want the application to be powered by. This requires writing some logic to initialize different ChatModels based on user configuration. The initChatModel() helper method makes it easy to initialize a number of different model integrations without having to worry about import paths and class names.
See the initChatModel() API reference for a full list of supported integrations.
Make sure you have the integration packages installed for any model providers you want to support. For example, you should have @langchain/openai installed to init an OpenAI model.
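For instance, to support the three providers used in this guide:

npm install @langchain/openai @langchain/anthropic @langchain/google-vertexai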
Compatibility: langchain >= 0.2.11. This functionality was added in langchain v0.2.11, so please make sure your package is up to date.
Basic usage
import { initChatModel } from "langchain/chat_models/universal";

// Returns a @langchain/openai ChatOpenAI instance.
const gpt4o = await initChatModel("gpt-4o", {
  modelProvider: "openai",
  temperature: 0,
});

// Returns a @langchain/anthropic ChatAnthropic instance.
const claudeOpus = await initChatModel("claude-3-opus-20240229", {
  modelProvider: "anthropic",
  temperature: 0,
});

// Returns a @langchain/google-vertexai ChatVertexAI instance.
const gemini15 = await initChatModel("gemini-1.5-pro", {
  modelProvider: "google_vertexai",
  temperature: 0,
});
// Since all model integrations implement the ChatModel interface, you can use them in the same way.
console.log(
  "GPT-4o: " + (await gpt4o.invoke("what's your name")).content + "\n"
);
console.log(
  "Claude Opus: " + (await claudeOpus.invoke("what's your name")).content + "\n"
);
console.log(
  "Gemini 1.5: " + (await gemini15.invoke("what's your name")).content + "\n"
);
GPT-4o: I'm an AI created by OpenAI, and I don't have a personal name. You can call me Assistant! How can I help you today?
Claude Opus: My name is Claude. It's nice to meet you!
Gemini 1.5: I am a large language model, trained by Google. I do not have a name.
Inferring model provider
For common and distinct model names, initChatModel() will attempt to infer the model provider. See the API reference for a full list of inference behavior. For example, any model that starts with gpt-3... or gpt-4... will be inferred as using model provider openai.
const gpt4o = await initChatModel("gpt-4o", {
  temperature: 0,
});
const claudeOpus = await initChatModel("claude-3-opus-20240229", {
  temperature: 0,
});
const gemini15 = await initChatModel("gemini-1.5-pro", {
  temperature: 0,
});
Creating a configurable model
You can also create a runtime-configurable model by specifying configurableFields. If you don't specify a model value, then "model" and "modelProvider" will be configurable by default.
const configurableModel = await initChatModel(undefined, { temperature: 0 });

await configurableModel.invoke("what's your name", {
  configurable: { model: "gpt-4o" },
});
AIMessage(content="I'm an AI language model created by OpenAI, and I don't have a personal name. You can call me Assistant or any other name you prefer! How can I assist you today?", response_metadata={'token_usage': {'completion_tokens': 37, 'prompt_tokens': 11, 'total_tokens': 48}, 'model_name': 'gpt-4o-2024-05-13', 'system_fingerprint': 'fp_d576307f90', 'finish_reason': 'stop', 'logprobs': None}, id='run-5428ab5c-b5c0-46de-9946-5d4ca40dbdc8-0', usage_metadata={'input_tokens': 11, 'output_tokens': 37, 'total_tokens': 48})
await configurableModel.invoke("what's your name", {
  configurable: { model: "claude-3-5-sonnet-20240620" },
});
AIMessage(content="My name is Claude. It's nice to meet you!", response_metadata={'id': 'msg_012XvotUJ3kGLXJUWKBVxJUi', 'model': 'claude-3-5-sonnet-20240620', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 11, 'output_tokens': 15}}, id='run-1ad1eefe-f1c6-4244-8bc6-90e2cb7ee554-0', usage_metadata={'input_tokens': 11, 'output_tokens': 15, 'total_tokens': 26})
Configurable model with default values
We can create a configurable model with default model values, specify which parameters are configurable, and add prefixes to configurable params:
const firstLlm = await initChatModel("gpt-4o", {
  temperature: 0,
  configurableFields: ["model", "modelProvider", "temperature", "maxTokens"],
  configPrefix: "first", // useful when you have a chain with multiple models
});
await firstLlm.invoke("what's your name");
AIMessage(content="I'm an AI language model created by OpenAI, and I don't have a personal name. You can call me Assistant or any other name you prefer! How can I assist you today?", response_metadata={'token_usage': {'completion_tokens': 37, 'prompt_tokens': 11, 'total_tokens': 48}, 'model_name': 'gpt-4o-2024-05-13', 'system_fingerprint': 'fp_ce0793330f', 'finish_reason': 'stop', 'logprobs': None}, id='run-3923e328-7715-4cd6-b215-98e4b6bf7c9d-0', usage_metadata={'input_tokens': 11, 'output_tokens': 37, 'total_tokens': 48})
await firstLlm.invoke("what's your name", {
  configurable: {
    first_model: "claude-3-5-sonnet-20240620",
    first_temperature: 0.5,
    first_maxTokens: 100,
  },
});
AIMessage(content="My name is Claude. It's nice to meet you!", response_metadata={'id': 'msg_01RyYR64DoMPNCfHeNnroMXm', 'model': 'claude-3-5-sonnet-20240620', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 11, 'output_tokens': 15}}, id='run-22446159-3723-43e6-88df-b84797e7751d-0', usage_metadata={'input_tokens': 11, 'output_tokens': 15, 'total_tokens': 26})
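The configPrefix comment above hints at chains with multiple models. As an illustrative sketch (the secondLlm name and its field choices are assumptions, not from the original), a second model can be given its own prefix so each model is reconfigured independently at runtime:

const secondLlm = await initChatModel("claude-3-5-sonnet-20240620", {
  temperature: 0,
  configurableFields: ["model", "temperature"],
  configPrefix: "second",
});

// Only config keys prefixed with "second_" affect this model, so it can be
// reconfigured without touching firstLlm's configuration.
await secondLlm.invoke("what's your name", {
  configurable: { second_model: "gpt-4o", second_temperature: 0.7 },
});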
Using a configurable model declaratively
We can call declarative operations like bindTools(), withStructuredOutput(), withConfig(), etc. on a configurable model, and chain a configurable model in the same way that we would a regularly instantiated chat model object.
import { z } from "zod";
import { tool } from "@langchain/core/tools";

const GetWeather = z
  .object({
    location: z.string().describe("The city and state, e.g. San Francisco, CA"),
  })
  .describe("Get the current weather in a given location");

const weatherTool = tool(
  (input) => {
    // do something
    return "138 degrees";
  },
  {
    name: "GetWeather",
    schema: GetWeather,
  }
);

const GetPopulation = z
  .object({
    location: z.string().describe("The city and state, e.g. San Francisco, CA"),
  })
  .describe("Get the current population in a given location");

const populationTool = tool(
  (input) => {
    // do something
    return "one hundred billion";
  },
  {
    name: "GetPopulation",
    schema: GetPopulation,
  }
);
const llm = await initChatModel(undefined, { temperature: 0 });
const llmWithTools = llm.bindTools([weatherTool, populationTool]);

(
  await llmWithTools.invoke("what's bigger in 2024 LA or NYC", {
    configurable: { model: "gpt-4o" },
  })
).tool_calls;
[{'name': 'GetPopulation',
'args': {'location': 'Los Angeles, CA'},
'id': 'call_sYT3PFMufHGWJD32Hi2CTNUP'},
{'name': 'GetPopulation',
'args': {'location': 'New York, NY'},
'id': 'call_j1qjhxRnD3ffQmRyqjlI1Lnk'}]
(
  await llmWithTools.invoke("what's bigger in 2024 LA or NYC", {
    configurable: { model: "claude-3-5-sonnet-20240620" },
  })
).tool_calls;
[{'name': 'GetPopulation',
'args': {'location': 'Los Angeles, CA'},
'id': 'toolu_01CxEHxKtVbLBrvzFS7GQ5xR'},
{'name': 'GetPopulation',
'args': {'location': 'New York City, NY'},
'id': 'toolu_013A79qt5toWSsKunFBDZd5S'}]
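As noted above, a configurable model can also be chained in the same way as a regularly instantiated one. A minimal sketch, assuming a simple prompt (the prompt text and the question input variable are illustrative, not from the original):

import { ChatPromptTemplate } from "@langchain/core/prompts";

const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a helpful assistant that answers concisely."],
  ["human", "{question}"],
]);

// The configurable model slots into a chain like a regular chat model;
// the config passed at invoke time still selects which model runs.
const chain = prompt.pipe(llmWithTools);

await chain.invoke(
  { question: "what's bigger in 2024 LA or NYC" },
  { configurable: { model: "gpt-4o" } }
);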