
Ollama Functions

danger

The LangChain Ollama integration package has official support for tool calling. Click here to view the documentation.

LangChain offers an experimental wrapper around open source models run locally via Ollama that gives them the same API as OpenAI Functions.

Note that more powerful and capable models will perform better with complex schemas and/or multiple functions. The examples below use Mistral.

danger

This is an experimental wrapper that attempts to bolt tool calling support onto models that do not natively support it. Use with caution.

Setup

Follow these instructions to set up and run a local Ollama instance.

Initialize model

You can initialize this wrapper the same way you'd initialize a standard ChatOllama instance:

import { OllamaFunctions } from "@langchain/community/experimental/chat_models/ollama_functions";

const model = new OllamaFunctions({
  temperature: 0.1,
  model: "mistral",
});

Passing in functions

You can now pass in functions the same way as OpenAI:

import { ChatOllama } from "@langchain/ollama";
import { HumanMessage } from "@langchain/core/messages";

const model = new ChatOllama({
  temperature: 0.1,
  model: "mistral",
})
  .bindTools([
    {
      name: "get_current_weather",
      description: "Get the current weather in a given location",
      parameters: {
        type: "object",
        properties: {
          location: {
            type: "string",
            description: "The city and state, e.g. San Francisco, CA",
          },
          unit: { type: "string", enum: ["celsius", "fahrenheit"] },
        },
        required: ["location"],
      },
    },
  ])
  .withConfig({
    // You can set the `tool_choice` arg to force the model to use a function
    tool_choice: "get_current_weather",
  });

const response = await model.invoke([
  new HumanMessage({
    content: "What's the weather in Boston?",
  }),
]);

console.log(response);

/*
  AIMessage {
    content: '',
    additional_kwargs: {
      function_call: {
        name: 'get_current_weather',
        arguments: '{"location":"Boston, MA","unit":"fahrenheit"}'
      }
    }
  }
*/
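Note that in the response above, `additional_kwargs.function_call.arguments` is a JSON string, not an object, so it needs to be parsed before use. A minimal sketch with a hypothetical helper (`parseFunctionCall` is not a LangChain API, just plain TypeScript for illustration):

```typescript
// Shape of the function_call object shown in the AIMessage above.
type FunctionCall = { name: string; arguments: string };

// Parse the stringified arguments into a usable object.
function parseFunctionCall(call: FunctionCall): {
  name: string;
  args: Record<string, unknown>;
} {
  return { name: call.name, args: JSON.parse(call.arguments) };
}

const { name, args } = parseFunctionCall({
  name: "get_current_weather",
  arguments: '{"location":"Boston, MA","unit":"fahrenheit"}',
});
// name is "get_current_weather"; args.location is "Boston, MA"
```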


Using for extraction

import { z } from "zod";

import { ChatOllama } from "@langchain/ollama";
import { PromptTemplate } from "@langchain/core/prompts";
import { JsonOutputFunctionsParser } from "@langchain/core/output_parsers/openai_functions";

const EXTRACTION_TEMPLATE = `Extract and save the relevant entities mentioned in the following passage together with their properties.

Passage:
{input}
`;

const prompt = PromptTemplate.fromTemplate(EXTRACTION_TEMPLATE);

// Use Zod for easier schema declaration
const schema = z.object({
  people: z.array(
    z.object({
      name: z.string().describe("The name of a person"),
      height: z.number().describe("The person's height"),
      hairColor: z.optional(z.string()).describe("The person's hair color"),
    })
  ),
});

const model = new ChatOllama({
  temperature: 0.1,
  model: "mistral",
})
  .bindTools([
    {
      name: "information_extraction",
      description: "Extracts the relevant information from the passage.",
      schema,
    },
  ])
  .withConfig({
    tool_choice: "information_extraction",
  });

// Use a JsonOutputFunctionsParser to get the parsed JSON response directly.
const chain = prompt.pipe(model).pipe(new JsonOutputFunctionsParser());

const response = await chain.invoke({
  input:
    "Alex is 5 feet tall. Claudia is 1 foot taller than Alex and jumps higher than him. Claudia has orange hair and Alex is blonde.",
});

console.log(JSON.stringify(response, null, 2));

/*
  {
    "people": [
      {
        "name": "Alex",
        "height": 5,
        "hairColor": "blonde"
      },
      {
        "name": "Claudia",
        "height": {
          "$num": 1,
          "add": [
            {
              "name": "Alex",
              "prop": "height"
            }
          ]
        },
        "hairColor": "orange"
      }
    ]
  }
*/
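Notice in the sample output above that Mistral returned Claudia's height as a nested expression object rather than the number the Zod schema asked for. Local models can drift from the schema like this, so it's worth validating the parsed result before using it downstream. A minimal sketch in plain TypeScript (the `Person` type and `validPeople` helper are illustrative, not part of any package):

```typescript
// Shape the extraction chain above is asked to produce.
interface Person {
  name: string;
  height: unknown; // the model may ignore the numeric type, as above
  hairColor?: string;
}

// Keep only entries whose height actually came back as a number;
// the rest can be retried or post-processed.
function validPeople(people: Person[]): Person[] {
  return people.filter((p) => typeof p.height === "number");
}

const sample: Person[] = [
  { name: "Alex", height: 5, hairColor: "blonde" },
  { name: "Claudia", height: { $num: 1 }, hairColor: "orange" },
];
const valid = validPeople(sample);
// valid contains only the Alex entry
```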


tip

You can see a simple LangSmith trace of this here.

Customization

Behind the scenes, this uses Ollama's JSON mode to constrain output to JSON, then passes the tool schemas as JSON schema into the prompt.
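To illustrate what "passing the schema into the prompt" amounts to, here is a simplified sketch of the rendering step: the tool definitions are stringified as JSON and substituted into a `{tools}` placeholder in the system prompt. This is an assumption-level simplification for illustration, not the wrapper's exact code:

```typescript
// A tool definition like the ones bound above.
const tools = [
  {
    name: "get_current_weather",
    description: "Get the current weather in a given location",
    parameters: {
      type: "object",
      properties: {
        location: { type: "string" },
        unit: { type: "string", enum: ["celsius", "fahrenheit"] },
      },
      required: ["location"],
    },
  },
];

const systemPromptTemplate = `You have access to the following tools:

{tools}`;

// Splice the serialized JSON schema into the prompt the model sees.
const renderedPrompt = systemPromptTemplate.replace(
  "{tools}",
  JSON.stringify(tools, null, 2)
);
// renderedPrompt now embeds the tool's JSON schema as text
```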

Because different models have different strengths, it may be helpful to pass in your own system prompt. Here's an example:

import { ChatOllama } from "@langchain/ollama";
import { HumanMessage, SystemMessage } from "@langchain/core/messages";

// Custom system prompt to format tools. You must encourage the model
// to wrap output in a JSON object with "tool" and "tool_input" properties.
const toolSystemPromptTemplate = `You have access to the following tools:

{tools}

To use a tool, respond with a JSON object with the following structure:
{{
  "tool": <name of the called tool>,
  "tool_input": <parameters for the tool matching the above JSON schema>
}}`;

const model = new ChatOllama({
  temperature: 0.1,
  model: "mistral",
})
  .bindTools([
    {
      name: "get_current_weather",
      description: "Get the current weather in a given location",
      parameters: {
        type: "object",
        properties: {
          location: {
            type: "string",
            description: "The city and state, e.g. San Francisco, CA",
          },
          unit: { type: "string", enum: ["celsius", "fahrenheit"] },
        },
        required: ["location"],
      },
    },
  ])
  .withConfig({
    // You can set the `tool_choice` arg to force the model to use a function
    tool_choice: "get_current_weather",
  });

const response = await model.invoke([
  new SystemMessage(toolSystemPromptTemplate),
  new HumanMessage({
    content: "What's the weather in Boston?",
  }),
]);

console.log(response);

/*
  AIMessage {
    content: '',
    additional_kwargs: {
      function_call: {
        name: 'get_current_weather',
        arguments: '{"location":"Boston, MA","unit":"fahrenheit"}'
      }
    }
  }
*/
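Because this approach relies on the model emitting well-formed JSON in the `{"tool": ..., "tool_input": ...}` shape the prompt asks for, it can still fail on harder inputs, and the model may answer in plain text instead. A defensive parser is cheap insurance; the `tryParseToolCall` helper below is hypothetical, not part of any package:

```typescript
// Return null instead of throwing when the model's reply is not a
// valid tool call in the expected {"tool", "tool_input"} shape.
function tryParseToolCall(
  raw: string
): { tool: string; tool_input: unknown } | null {
  try {
    const parsed = JSON.parse(raw);
    if (parsed && typeof parsed.tool === "string" && "tool_input" in parsed) {
      return parsed;
    }
  } catch {
    // Fall through: the model replied with plain text, not JSON.
  }
  return null;
}

const good = tryParseToolCall(
  '{"tool":"get_current_weather","tool_input":{"location":"Boston, MA"}}'
);
const bad = tryParseToolCall("Sorry, I can't help with that.");
// good is a tool call object; bad is null
```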

