How to init any model in one line
Many LLM applications let end users specify which model provider and model they want the application to be powered by. This requires writing some logic to initialize different chat models based on user configuration. The initChatModel() helper method makes it easy to initialize a number of different model integrations without having to worry about import paths and class names. Keep in mind this feature is only for chat models.
This guide assumes familiarity with the following concepts:
This feature is only intended to be used in Node environments. Use in non-Node environments or with bundlers is not guaranteed to work and is not officially supported.
initChatModel requires langchain>=0.2.11. See this guide for some considerations to take into account when upgrading.
See the initChatModel() API reference for a full list of supported integrations.
Make sure you have the integration packages installed for any model providers you want to support. For example, you should have @langchain/openai installed to init an OpenAI model.
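For instance, to support the three providers used in the examples below, you might install (package names as published on npm):

npm install @langchain/openai @langchain/anthropic @langchain/google-vertexai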
Basic usage
import { initChatModel } from "langchain/chat_models/universal";

// Returns a @langchain/openai ChatOpenAI instance.
const gpt4o = await initChatModel("gpt-4o", {
  modelProvider: "openai",
  temperature: 0,
});

// You can also specify the model provider in the model name like this in
// langchain>=0.3.18:

// Returns a @langchain/anthropic ChatAnthropic instance.
const claudeOpus = await initChatModel("anthropic:claude-3-opus-20240229", {
  temperature: 0,
});

// Returns a @langchain/google-vertexai ChatVertexAI instance.
const gemini15 = await initChatModel("google-vertexai:gemini-1.5-pro", {
  temperature: 0,
});

// Since all model integrations implement the ChatModel interface, you can
// use them in the same way.
console.log(`GPT-4o: ${(await gpt4o.invoke("what's your name")).content}\n`);
console.log(
  `Claude Opus: ${(await claudeOpus.invoke("what's your name")).content}\n`
);
console.log(
  `Gemini 1.5: ${(await gemini15.invoke("what's your name")).content}\n`
);
/*
GPT-4o: I'm an AI language model created by OpenAI, and I don't have a personal name. You can call me Assistant or any other name you prefer! How can I help you today?
Claude Opus: My name is Claude. It's nice to meet you!
Gemini 1.5: I don't have a name. I am a large language model, and I am not a person. I am a computer program that can generate text, translate languages, write different kinds of creative content, and answer your questions in an informative way.
*/
API Reference:
- initChatModel from langchain/chat_models/universal
Inferring model provider
For common and distinct model names, initChatModel() will attempt to infer the model provider. See the API reference for a full list of inference behavior. For example, any model name that starts with gpt-3... or gpt-4... will be inferred as using the model provider openai.
import { initChatModel } from "langchain/chat_models/universal";

const gpt4o = await initChatModel("gpt-4o", {
  temperature: 0,
});
const claudeOpus = await initChatModel("claude-3-opus-20240229", {
  temperature: 0,
});
const gemini15 = await initChatModel("gemini-1.5-pro", {
  temperature: 0,
});
API Reference:
- initChatModel from langchain/chat_models/universal
Creating a configurable model
You can also create a runtime-configurable model by specifying configurableFields. If you don't specify a model value, then "model" and "modelProvider" will be configurable by default.
import { initChatModel } from "langchain/chat_models/universal";

const configurableModel = await initChatModel(undefined, { temperature: 0 });

const gpt4Res = await configurableModel.invoke("what's your name", {
  configurable: { model: "gpt-4o" },
});
console.log("gpt4Res: ", gpt4Res.content);
/*
gpt4Res: I'm an AI language model created by OpenAI, and I don't have a personal name. You can call me Assistant or any other name you prefer! How can I assist you today?
*/

const claudeRes = await configurableModel.invoke("what's your name", {
  configurable: { model: "claude-3-5-sonnet-20240620" },
});
console.log("claudeRes: ", claudeRes.content);
/*
claudeRes: My name is Claude. It's nice to meet you!
*/
API Reference:
- initChatModel from langchain/chat_models/universal
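If you find yourself passing the same configurable values on every call, you can instead pin them once with the standard Runnable withConfig method. A minimal sketch (reusing configurableModel from above; the model name is just an example):

// Pin a default model once; later invocations need no per-call config.
const gpt4Model = configurableModel.withConfig({
  configurable: { model: "gpt-4o" },
});
const res = await gpt4Model.invoke("what's your name");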
Configurable model with default values
We can create a configurable model with default model values, specify which parameters are configurable, and add prefixes to configurable params:
import { initChatModel } from "langchain/chat_models/universal";

const firstLlm = await initChatModel("gpt-4o", {
  temperature: 0,
  configurableFields: ["model", "modelProvider", "temperature", "maxTokens"],
  configPrefix: "first", // useful when you have a chain with multiple models
});

const openaiRes = await firstLlm.invoke("what's your name");
console.log("openaiRes: ", openaiRes.content);
/*
openaiRes: I'm an AI language model created by OpenAI, and I don't have a personal name. You can call me Assistant or any other name you prefer! How can I assist you today?
*/

const claudeRes = await firstLlm.invoke("what's your name", {
  configurable: {
    first_model: "claude-3-5-sonnet-20240620",
    first_temperature: 0.5,
    first_maxTokens: 100,
  },
});
console.log("claudeRes: ", claudeRes.content);
/*
claudeRes: My name is Claude. It's nice to meet you!
*/
API Reference:
- initChatModel from langchain/chat_models/universal
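To see why the prefix is useful, imagine a second configurable model living in the same chain: prefixes keep each model's runtime params namespaced, so a single shared config can target the models independently. A minimal sketch (the "second" prefix and model name below are purely illustrative):

const secondLlm = await initChatModel("claude-3-5-sonnet-20240620", {
  temperature: 0,
  configurableFields: ["model", "temperature"],
  configPrefix: "second",
});

// One shared runtime config can now address each model separately, e.g.:
// { first_temperature: 0.2, second_temperature: 0.9 }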
Using a configurable model declaratively
We can call declarative operations like bindTools, withStructuredOutput, withConfig, etc. on a configurable model, and chain a configurable model in the same way that we would a regularly instantiated chat model object.
import { z } from "zod";
import { tool } from "@langchain/core/tools";
import { initChatModel } from "langchain/chat_models/universal";

const GetWeather = z
  .object({
    location: z.string().describe("The city and state, e.g. San Francisco, CA"),
  })
  .describe("Get the current weather in a given location");

const weatherTool = tool(
  (_) => {
    // do something
    return "138 degrees";
  },
  {
    name: "GetWeather",
    schema: GetWeather,
  }
);

const GetPopulation = z
  .object({
    location: z.string().describe("The city and state, e.g. San Francisco, CA"),
  })
  .describe("Get the current population in a given location");

const populationTool = tool(
  (_) => {
    // do something
    return "one hundred billion";
  },
  {
    name: "GetPopulation",
    schema: GetPopulation,
  }
);

const llm = await initChatModel(undefined, { temperature: 0 });
const llmWithTools = llm.bindTools([weatherTool, populationTool]);

const toolCalls1 = (
  await llmWithTools.invoke("what's bigger in 2024 LA or NYC", {
    configurable: { model: "gpt-4o" },
  })
).tool_calls;
console.log("toolCalls1: ", JSON.stringify(toolCalls1, null, 2));
/*
toolCalls1: [
  {
    "name": "GetPopulation",
    "args": {
      "location": "Los Angeles, CA"
    },
    "type": "tool_call",
    "id": "call_DXRBVE4xfLYZfhZOsW1qRbr5"
  },
  {
    "name": "GetPopulation",
    "args": {
      "location": "New York, NY"
    },
    "type": "tool_call",
    "id": "call_6ec3m4eWhwGz97sCbNt7kOvC"
  }
]
*/

const toolCalls2 = (
  await llmWithTools.invoke("what's bigger in 2024 LA or NYC", {
    configurable: { model: "claude-3-5-sonnet-20240620" },
  })
).tool_calls;
console.log("toolCalls2: ", JSON.stringify(toolCalls2, null, 2));
/*
toolCalls2: [
  {
    "name": "GetPopulation",
    "args": {
      "location": "Los Angeles, CA"
    },
    "id": "toolu_01K3jNU8jx18sJ9Y6Q9SooJ7",
    "type": "tool_call"
  },
  {
    "name": "GetPopulation",
    "args": {
      "location": "New York City, NY"
    },
    "id": "toolu_01UiANKaSwYykuF4hi3t5oNB",
    "type": "tool_call"
  }
]
*/
API Reference:
- tool from @langchain/core/tools
- initChatModel from langchain/chat_models/universal
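The other declarative operations mentioned above compose the same way. For instance, a minimal withStructuredOutput sketch (reusing the GetWeather schema and the configurable llm from above; the exact parsed output will vary by model):

const structuredLlm = llm.withStructuredOutput(GetWeather);
const weather = await structuredLlm.invoke("what's the weather in SF", {
  configurable: { model: "gpt-4o" },
});
// e.g. { location: "San Francisco, CA" }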