How to stream structured output to the client
This guide will walk you through how we stream agent data to the client using React Server Components inside this directory. The code in this doc is taken from the `page.tsx` and `action.ts` files in this directory; to view the full, uninterrupted code, see `action.ts` for the actions file and `page.tsx` for the client file.
This guide assumes familiarity with the following concepts:
Setup
First, install the necessary LangChain & AI SDK packages:

```bash
# npm
npm install @langchain/openai @langchain/core ai zod zod-to-json-schema

# Yarn
yarn add @langchain/openai @langchain/core ai zod zod-to-json-schema

# pnpm
pnpm add @langchain/openai @langchain/core ai zod zod-to-json-schema
```
Next, we'll create our server file. This will contain all the logic for making tool calls and sending the data back to the client.
Start by adding the necessary imports & the `"use server"` directive:
"use server";
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { createStreamableValue } from "ai/rsc";
import { z } from "zod";
import { zodToJsonSchema } from "zod-to-json-schema";
import { JsonOutputKeyToolsParser } from "@langchain/core/output_parsers/openai_tools";
After that, we'll define our tool schema. For this example, we'll use a simple demo weather schema:
```typescript
const Weather = z
  .object({
    city: z.string().describe("City to search for weather"),
    state: z.string().describe("State abbreviation to search for weather"),
  })
  .describe("Weather search parameters");
```
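For reference, here is roughly the JSON Schema that `zodToJsonSchema(Weather)` produces, and which is what actually gets sent to the model as the tool's parameters. This is a trimmed sketch; the exact output (e.g. a `$schema` field) depends on your `zod-to-json-schema` version:

```typescript
// Approximately what zodToJsonSchema(Weather) returns (trimmed):
const weatherJsonSchema = {
  type: "object",
  description: "Weather search parameters",
  properties: {
    city: { type: "string", description: "City to search for weather" },
    state: {
      type: "string",
      description: "State abbreviation to search for weather",
    },
  },
  required: ["city", "state"],
  additionalProperties: false,
};
```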
Once our schema is defined, we can implement our `executeTool` function. This function takes a single `string` input, and contains all the logic for our tool and for streaming data back to the client:
```typescript
export async function executeTool(input: string) {
  "use server";
  const stream = createStreamableValue();
```
The `createStreamableValue` function is important, as it's what we'll use to actually stream all of the data back to the client.
For the main logic, we'll wrap it in a self-invoking async function, so that the streamable value can be returned to the client right away while the chain keeps running in the background. Start by defining our prompt and chat model:
```typescript
  (async () => {
    const prompt = ChatPromptTemplate.fromMessages([
      [
        "system",
        `You are a helpful assistant. Use the tools provided to best assist the user.`,
      ],
      ["human", "{input}"],
    ]);

    const llm = new ChatOpenAI({
      model: "gpt-4o-2024-05-13",
      temperature: 0,
    });
```
After defining our chat model, we'll define our runnable chain using LCEL. We start by binding the `Weather` tool we defined earlier to the model:
```typescript
    const modelWithTools = llm.bind({
      tools: [
        {
          type: "function" as const,
          function: {
            name: "get_weather",
            description: Weather.description,
            parameters: zodToJsonSchema(Weather),
          },
        },
      ],
    });
```
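If you want the model to always call the weather tool rather than deciding for itself, you can additionally pass a `tool_choice` call option when binding. This is a sketch (not part of the original example) using the OpenAI-style named tool choice that `ChatOpenAI` accepts:

```typescript
// Sketch (not in the original files): force the model to always call
// the weather tool by passing an OpenAI-style tool_choice option.
const modelWithForcedTool = llm.bind({
  tools: [
    {
      type: "function" as const,
      function: {
        name: "get_weather",
        description: Weather.description,
        parameters: zodToJsonSchema(Weather),
      },
    },
  ],
  tool_choice: {
    type: "function" as const,
    function: { name: "get_weather" },
  },
});
```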
Next, we'll use LCEL to pipe each component together, starting with the prompt, then the model with tools, and finally the output parser:
```typescript
    const chain = prompt.pipe(modelWithTools).pipe(
      new JsonOutputKeyToolsParser<z.infer<typeof Weather>>({
        keyName: "get_weather",
        zodSchema: Weather,
      })
    );
```
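Because the parser aggregates the partial tool-call chunks as they arrive, each item yielded by the stream is a progressively more complete version of the parsed tool arguments. For an input like "What's the weather in San Francisco?", the streamed values might look roughly like the following (hypothetical values for illustration; the exact shape depends on the parser's options, e.g. it may yield an array of tool-call args):

```typescript
// Illustrative (hypothetical) values yielded by chain.stream():
// 1st chunk: {}
// 2nd chunk: { city: "San" }
// 3rd chunk: { city: "San Francisco" }
// final:     { city: "San Francisco", state: "CA" }
```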
Finally, we'll call `.stream` on our chain. Similarly to the streaming agent example, we'll iterate over the stream, stringify and re-parse each item to ensure it's a plain serializable object, and then update the stream value:
```typescript
    const streamResult = await chain.stream({
      input,
    });

    // Stringify and re-parse each chunk so that only a plain, serializable
    // object is sent across the RSC boundary to the client.
    for await (const item of streamResult) {
      stream.update(JSON.parse(JSON.stringify(item, null, 2)));
    }

    // Signal to the client that no more values are coming.
    stream.done();
  })();

  return { streamData: stream.value };
}
```
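On the client side, the returned `streamData` can be consumed with `readStreamableValue` from `ai/rsc`. Below is a minimal sketch of what that might look like, assuming the server action above lives in `action.ts`; the actual `page.tsx` file in this directory may differ:

```tsx
"use client";

import { useState } from "react";
import { readStreamableValue } from "ai/rsc";
// Assumes the server action above is exported from ./action.ts.
import { executeTool } from "./action";

export default function Page() {
  const [input, setInput] = useState("");
  const [data, setData] = useState<Record<string, unknown>>();

  async function handleSubmit() {
    // Call the server action, then iterate over the streamed values,
    // re-rendering with each progressively more complete object.
    const { streamData } = await executeTool(input);
    for await (const item of readStreamableValue(streamData)) {
      setData(item as Record<string, unknown>);
    }
  }

  return (
    <div>
      <input value={input} onChange={(e) => setInput(e.target.value)} />
      <button onClick={handleSubmit}>Run</button>
      <pre>{JSON.stringify(data, null, 2)}</pre>
    </div>
  );
}
```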