
# How to build an LLM generated UI

This guide will walk through some high-level concepts and code snippets for building generative UIs using LangChain.js. To see the full code for generative UI, visit our official LangChain Next.js template.

The sample implements a tool calling agent, which outputs an interactive UI element when streaming intermediate outputs of tool calls to the client.

We introduce two utilities which wrap the AI SDK to make it easier to yield React elements inside runnables and tool calls: `createRunnableUI` and `streamRunnableUI`.

- `streamRunnableUI` executes the provided Runnable with the `streamEvents` method and sends every stream event to the client via the React Server Components stream (a simplified sketch follows this list).
- `createRunnableUI` wraps the `createStreamableUI` function from the AI SDK to properly hook into the Runnable event stream.
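For intuition, here is a minimal, hedged sketch of how `streamRunnableUI` can be put together. The real implementation lives in the template's server utilities; the event filtering, the return shape, and the custom-event plumbing used by `createRunnableUI`-wrapped tools are simplified away here.

```tsx
import { createStreamableUI } from "ai/rsc";
import type { Runnable } from "@langchain/core/runnables";

// Simplified sketch: run the runnable with streamEvents and forward
// output to a React Server Components stream. The template's real
// version also handles the events emitted by tools wrapped with
// createRunnableUI.
export function streamRunnableUI<RunInput>(
  runnable: Runnable<RunInput, unknown>,
  inputs: RunInput
) {
  const ui = createStreamableUI();

  // Consume the event stream in the background and close the UI
  // stream once the run finishes.
  (async () => {
    for await (const event of runnable.streamEvents(inputs, {
      version: "v2",
    })) {
      // Append streamed chat model tokens as they arrive.
      if (event.event === "on_chat_model_stream") {
        const content = event.data.chunk?.content;
        if (typeof content === "string") ui.append(content);
      }
    }
    ui.done();
  })();

  return ui.value;
}
```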

The usage is then as follows:

"use server";

const tool = tool(
async (input, config) => {
const stream = await createRunnableUI(config);
stream.update(<div>Searching...</div>);

const result = await images(input);
stream.done(
<Images
images={result.images_results
.map((image) => image.thumbnail)
.slice(0, input.limit)}
/>
);

return `[Returned ${result.images_results.length} images]`;
},
{
name: "Images",
description: "A tool to search for images. input should be a search query.",
schema: z.object({
query: z.string().describe("The search query used to search for cats"),
limit: z.number().describe("The number of pictures shown to the user"),
}),
}
);

// add LLM, prompt, etc...

const tools = [tool];



export const agentExecutor = new AgentExecutor({
agent: createToolCallingAgent({ llm, tools, prompt }),
tools,
});
```





:::tip
As of `langchain` version `0.2.8`, the `createToolCallingAgent` function now supports [OpenAI-formatted tools](https://api.js.langchain.com/interfaces/langchain_core.language_models_base.ToolDefinition.html).
:::
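For reference, an OpenAI-formatted tool definition mirroring the `Images` tool's zod schema looks roughly like this (a sketch; the field values are copied from the tool above):

```tsx
// An OpenAI-formatted tool definition equivalent to the Images tool's
// schema; objects of this shape can be passed in the agent's tools.
const imagesToolDefinition = {
  type: "function" as const,
  function: {
    name: "Images",
    description: "A tool to search for images. input should be a search query.",
    parameters: {
      type: "object",
      properties: {
        query: {
          type: "string",
          description: "The search query used to search for cats",
        },
        limit: {
          type: "number",
          description: "The number of pictures shown to the user",
        },
      },
      required: ["query", "limit"],
    },
  },
};
```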



```tsx agent.tsx
import { ChatMessage } from "@langchain/core/messages";

// streamRunnableUI and exposeEndpoints come from the template's server
// utilities; agentExecutor is the executor defined above.
async function agent(inputs: {
  input: string;
  chat_history: [role: string, content: string][];
}) {
  "use server";

  return streamRunnableUI(agentExecutor, {
    input: inputs.input,
    chat_history: inputs.chat_history.map(
      ([role, content]) => new ChatMessage(content, role)
    ),
  });
}

export const EndpointsContext = exposeEndpoints({ agent });
```



In order to ensure all of the client components are included in the bundle, we need to wrap all of the Server Actions with the `exposeEndpoints` method. These endpoints will be accessible from the client via the Context API, as seen in the `useActions` hook.
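For intuition, a minimal sketch of what `exposeEndpoints` might do, assuming a hypothetical `AIProvider` client component that stores the actions in the React context read by `useActions` (the template's actual utility handles this wiring):

```tsx
import type { ReactNode } from "react";
// AIProvider is a hypothetical "use client" component that puts the
// actions on a React context which useActions reads from.
import { AIProvider } from "./client";

export function exposeEndpoints<T extends Record<string, unknown>>(
  actions: T
) {
  // Return a Server Component that wraps the app and makes the Server
  // Actions reachable from client components via context.
  return async function AI(props: { children: ReactNode }) {
    return <AIProvider actions={actions}>{props.children}</AIProvider>;
  };
}
```

A client page can then look up the `agent` action from that context and render whatever UI it streams back: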

```tsx
"use client";
import type { EndpointsContext } from "./agent";

export default function Page() {
const actions = useActions<typeof EndpointsContext>();
const [node, setNode] = useState();

return (
<div>
{node}

<button
onClick={async () => {
setNode(await actions.agent({ input: "cats" }));
}}
>
Get images of cats
</button>
</div>
);
}
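Clicking the button invokes the `agent` Server Action and stores the streamed UI in `node`: the user first sees the `Searching...` placeholder from `stream.update`, which is then replaced by the final `<Images>` element once the tool calls `stream.done`.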