
How to stream agent data to the client

This guide will walk you through how we stream agent data to the client using React Server Components inside this directory. The code in this doc is taken from the `page.tsx` and `action.ts` files in this directory. To view the full, uninterrupted code, click here for the actions file and here for the client file.

Prerequisites

This guide assumes familiarity with the following concepts:

Setup

First, install the necessary LangChain & AI SDK packages:

```bash
npm install langchain @langchain/core @langchain/community ai
```

In this demo we'll be using the `TavilySearchResults` tool, which requires an API key. You can get one here, or you can swap it out for another tool of your choice, like `WikipediaQueryRun`, which doesn't require an API key.

If you choose to use `TavilySearchResults`, set your API key like so:

```bash
export TAVILY_API_KEY=your_api_key
```

Get started

The first step is to create a new RSC file and add the imports we'll use for running our agent. In this demo, we'll name it `action.ts`:

```typescript
"use server";

import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { TavilySearchResults } from "@langchain/community/tools/tavily_search";
import { AgentExecutor, createToolCallingAgent } from "langchain/agents";
import { pull } from "langchain/hub";
import { createStreamableValue } from "ai/rsc";
```

Next, we'll define a `runAgent` function. This function takes a single `string` input and contains all the logic for our agent and for streaming data back to the client:

```typescript
export async function runAgent(input: string) {
  "use server";
}
```

Next, inside our function, we'll define our chat model of choice:

```typescript
const llm = new ChatOpenAI({
  model: "gpt-4o-2024-05-13",
  temperature: 0,
});
```

Next, we'll use the `createStreamableValue` helper function provided by the `ai` package to create a streamable value:

```typescript
const stream = createStreamableValue();
```

This will be very important later on when we start streaming data back to the client.
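Conceptually, a streamable value is a one-way channel: the server side calls `update` repeatedly and `done` once, while the client side reads values as they arrive. Purely as an illustration of that contract (this is not the `ai` package's actual implementation), a minimal in-memory analogue could look like:

```typescript
// Illustrative only: a tiny in-memory analogue of the update/done/read cycle
// that createStreamableValue and readStreamableValue perform across the wire.
function makeStream<T>() {
  const items: T[] = [];
  let finished = false;
  let wake: (() => void) | null = null;

  return {
    // Producer side: push a new value and wake any waiting consumer.
    update(value: T) {
      items.push(value);
      wake?.();
    },
    // Producer side: signal that no more values are coming.
    done() {
      finished = true;
      wake?.();
    },
    // Consumer side: yields each update as it arrives, ends after done().
    async *read(): AsyncGenerator<T> {
      let i = 0;
      while (true) {
        if (i < items.length) {
          yield items[i++];
          continue;
        }
        if (finished) return;
        await new Promise<void>((resolve) => (wake = resolve));
      }
    },
  };
}
```

The real `createStreamableValue` performs the same dance across the server/client boundary, which is why we hold onto `stream` here and hand `stream.value` to the client later.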

Next, let's define the async function that contains our agent logic:

```typescript
(async () => {
  const tools = [new TavilySearchResults({ maxResults: 1 })];

  const prompt = await pull<ChatPromptTemplate>(
    "hwchase17/openai-tools-agent",
  );

  const agent = createToolCallingAgent({
    llm,
    tools,
    prompt,
  });

  const agentExecutor = new AgentExecutor({
    agent,
    tools,
  });
```

tip

As of `langchain` version 0.2.8, the `createToolCallingAgent` function supports OpenAI-formatted tools.

Here you can see we're doing a few things:

First, we're defining our list of tools (in this case a single tool) and pulling in our prompt from the LangChain prompt hub.

After that, we're passing our LLM, tools, and prompt to the `createToolCallingAgent` function, which constructs and returns a runnable agent. This is then passed into the `AgentExecutor` class, which handles the execution and streaming of our agent.

Finally, we'll call `.streamEvents` and pass the streamed data back to the `stream` variable we defined above:

```typescript
  const streamingEvents = agentExecutor.streamEvents(
    { input },
    { version: "v2" },
  );

  for await (const item of streamingEvents) {
    stream.update(JSON.parse(JSON.stringify(item, null, 2)));
  }

  stream.done();
})();
```

As you can see above, we're doing something a little wacky by stringifying and then parsing our data. This works around a bug in the RSC streaming code: if you stringify and parse as we do above, you shouldn't hit it.
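To make the workaround concrete: stream events often contain class instances (message chunks, for example), and a JSON round-trip flattens them into plain serializable objects. A self-contained illustration, with `FakeChunk` standing in for a real chunk class:

```typescript
// Stand-in for a class instance (e.g. a message chunk) inside a stream event.
class FakeChunk {
  constructor(public content: string) {}
}

const event = {
  event: "on_chat_model_stream",
  data: { chunk: new FakeChunk("Hel") },
};

// The round-trip keeps the data but drops the class identity,
// leaving a plain object that is safe to send over the RSC boundary.
const plain = JSON.parse(JSON.stringify(event));
```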

Finally, at the bottom of the function, return the stream value:

```typescript
return { streamData: stream.value };
```

Once we've implemented our server action, we can add a couple of lines of code in our client function to request and stream this data.

First, add the necessary imports:

```typescript
"use client";

import { useState } from "react";
import { readStreamableValue } from "ai/rsc";
// StreamEvent is the type of the events yielded by `.streamEvents`.
import type { StreamEvent } from "@langchain/core/tracers/log_stream";
import { runAgent } from "./action";
```

Then, inside our `Page` function, calling the `runAgent` function is straightforward:

```typescript
export default function Page() {
  const [input, setInput] = useState("");
  const [data, setData] = useState<StreamEvent[]>([]);

  async function handleSubmit(e: React.FormEvent) {
    e.preventDefault();

    const { streamData } = await runAgent(input);
    for await (const item of readStreamableValue(streamData)) {
      setData((prev) => [...prev, item]);
    }
  }

  // ...render a form that calls handleSubmit and displays `data` here
}
```
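Once events accumulate in `data`, you'll often want just the model's text tokens rather than the raw event objects. A hypothetical helper (the event shape assumed below matches LangChain's v2 `on_chat_model_stream` events, but verify it against the events you actually receive):

```typescript
// Minimal shape of the streamed events we care about here (assumed, not exhaustive).
type TokenEvent = {
  event: string;
  data?: { chunk?: { content?: string } };
};

// Concatenate the token text from chat-model stream events, ignoring the rest.
function collectTokens(events: TokenEvent[]): string {
  return events
    .filter((e) => e.event === "on_chat_model_stream")
    .map((e) => e.data?.chunk?.content ?? "")
    .join("");
}
```

You could then render `collectTokens(data)` instead of dumping the raw event list.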

That's it! You've successfully built an agent that streams data back to the client. You can now run your application and see the data streaming in real time.