
LangChain Expression Language (LCEL)

The LangChain Expression Language (LCEL) takes a declarative approach to building new Runnables from existing Runnables.

This means that you describe what you want to happen, rather than how you want it to happen, allowing LangChain to optimize the run-time execution of the chains.

We often refer to a Runnable created using LCEL as a "chain". It's important to remember that a "chain" is a Runnable and it implements the full Runnable Interface.

note
  • The LCEL cheatsheet shows common patterns that involve the Runnable interface and LCEL expressions.

  • Please see the following list of how-to guides that cover common tasks with LCEL.

  • A list of built-in Runnables can be found in the LangChain Core API Reference. Many of these Runnables are useful when composing custom "chains" in LangChain using LCEL.

Benefits of LCEL

LangChain optimizes the run-time execution of chains built with LCEL in a number of ways:

  • Optimize parallel execution: Run Runnables in parallel using RunnableParallel, or run multiple inputs through a given chain in parallel using the Runnable Batch API. Parallel execution can significantly reduce latency, as processing happens concurrently rather than sequentially.

  • Simplify streaming: LCEL chains can be streamed, allowing for incremental output as the chain executes. LangChain can optimize streaming of the output to minimize time-to-first-token (the time elapsed until the first chunk of output from a chat model or LLM is produced).
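The latency benefit of batching can be sketched in plain TypeScript, without the library: all inputs are run through the same chain concurrently, so total latency approaches that of the slowest single call rather than the sum of all calls. This is a conceptual sketch of the idea, not the actual @langchain/core implementation; `batch`, `toyChain`, and the `Chain` type are illustrative names.

```typescript
// A "chain" here is just an async function from input to output.
type Chain<In, Out> = (input: In) => Promise<Out>;

// Conceptual sketch of a Runnable-style batch(): start every
// invocation before awaiting any, preserving input order in
// the results. (Illustrative only, not the real API.)
async function batch<In, Out>(
  chain: Chain<In, Out>,
  inputs: In[]
): Promise<Out[]> {
  return Promise.all(inputs.map((input) => chain(input)));
}

// Example: a toy "chain" that uppercases its input.
const toyChain: Chain<string, string> = async (s) => s.toUpperCase();

batch(toyChain, ["a", "b"]).then((out) => console.log(out)); // ["A", "B"]
```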

Other benefits include:

  • Seamless LangSmith tracing: As your chains get more and more complex, it becomes increasingly important to understand what exactly is happening at every step. With LCEL, all steps are automatically logged to LangSmith for maximum observability and debuggability.

  • Standard API: Because all chains are built using the Runnable interface, they can be used in the same way as any other Runnable.

  • Deployable with LangServe: Chains built with LCEL can be deployed with LangServe for production use.

Should I use LCEL?

LCEL is an orchestration solution -- it allows LangChain to handle run-time execution of chains in an optimized way.

While we have seen users run chains with hundreds of steps in production, we generally recommend using LCEL for simpler orchestration tasks. When the application requires complex state management, branching, cycles, or multiple agents, we recommend that users take advantage of LangGraph.

In LangGraph, users define graphs that specify the flow of the application. This allows users to keep using LCEL within individual nodes when LCEL is needed, while making it easy to define complex orchestration logic that is more readable and maintainable.

Here are some guidelines:

  • If you are making a single LLM call, you don't need LCEL; instead, call the underlying chat model directly.

  • If you have a simple chain (e.g., prompt + llm + parser, simple retrieval setup, etc.), LCEL is a reasonable fit, provided you're taking advantage of the LCEL benefits.

  • If you're building a complex chain (e.g., with branching, cycles, multiple agents, etc.), use LangGraph instead. Remember that you can always use LCEL within individual nodes in LangGraph.

Composition Primitives

LCEL chains are built by composing existing Runnables together. The two main composition primitives are RunnableSequence and RunnableParallel.

Many other composition primitives (e.g., RunnableAssign) can be thought of as variations of these two primitives.

note

You can find a list of all composition primitives in the LangChain Core API Reference.

RunnableSequence

RunnableSequence is a composition primitive that allows you to "chain" multiple runnables sequentially, with the output of one runnable serving as the input to the next.

import { RunnableSequence } from "@langchain/core/runnables";

const chain = new RunnableSequence({
  first: runnable1,
  // Optional, use if you have more than two runnables
  // middle: [...],
  last: runnable2,
});

Invoking the chain with some input:

const finalOutput = await chain.invoke(someInput);

corresponds to the following:

const output1 = await runnable1.invoke(someInput);
const finalOutput = await runnable2.invoke(output1);

note

runnable1 and runnable2 are placeholders for any Runnables that you want to chain together.
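The sequential semantics above amount to ordinary async function composition: each step's output becomes the next step's input. A minimal sketch in plain TypeScript (illustrative only, not the actual @langchain/core implementation; `sequence`, `trim`, and `strLength` are made-up names):

```typescript
// A "runnable" here is just an async function from input to output.
type Runnable<In, Out> = (input: In) => Promise<Out>;

// Conceptual sketch of RunnableSequence: feed the output of the
// first step into the last step.
function sequence<A, B, C>(
  first: Runnable<A, B>,
  last: Runnable<B, C>
): Runnable<A, C> {
  return async (input) => last(await first(input));
}

// Example: two toy steps chained together.
const trim: Runnable<string, string> = async (s) => s.trim();
const strLength: Runnable<string, number> = async (s) => s.length;

const chain = sequence(trim, strLength);
chain("  hello  ").then(console.log); // 5
```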

RunnableParallel

RunnableParallel is a composition primitive that allows you to run multiple runnables concurrently, with the same input provided to each.

import { RunnableParallel } from "@langchain/core/runnables";

const chain = RunnableParallel.from({
  key1: runnable1,
  key2: runnable2,
});

Invoking the chain with some input:

const finalOutput = await chain.invoke(someInput);

will yield a finalOutput object with the same keys as the chain's definition, with each value replaced by the output of the corresponding runnable:

{
  key1: await runnable1.invoke(someInput),
  key2: await runnable2.invoke(someInput),
}

Recall that the runnables are executed in parallel, so while the result is the same as the object shown above, the execution time is much faster.
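The parallel semantics can be sketched in plain TypeScript: every step receives the same input, all steps start before any is awaited, and the results are collected under the corresponding keys. This is a conceptual sketch, not the actual @langchain/core implementation; `runParallel`, `strLength`, and `upper` are illustrative names.

```typescript
// A "runnable" here is just an async function from input to output.
type Runnable<In, Out> = (input: In) => Promise<Out>;

// Conceptual sketch of RunnableParallel: run every step
// concurrently on the same input and collect the results
// under the corresponding keys.
async function runParallel<In>(
  steps: Record<string, Runnable<In, any>>,
  input: In
): Promise<Record<string, any>> {
  const keys = Object.keys(steps);
  // Start every step before awaiting any of them.
  const values = await Promise.all(keys.map((k) => steps[k](input)));
  return Object.fromEntries(keys.map((k, i) => [k, values[i]]));
}

// Example: two toy runnables over the same input.
const strLength: Runnable<string, number> = async (s) => s.length;
const upper: Runnable<string, string> = async (s) => s.toUpperCase();

runParallel({ strLength, upper }, "hello").then(console.log);
// { strLength: 5, upper: "HELLO" }
```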

Composition Syntax

The usage of RunnableSequence and RunnableParallel is so common that we created a shorthand syntax for using them. This helps to make the code more readable and concise.

The pipe method

You can pipe runnables together using the .pipe(runnable) method.

const chain = runnable1.pipe(runnable2);

is equivalent to:

const chain = new RunnableSequence({
  first: runnable1,
  last: runnable2,
});

RunnableLambda functions

You can define generic TypeScript functions as runnables through the RunnableLambda class.

import { RunnableLambda } from "@langchain/core/runnables";

const someFunc = RunnableLambda.from((input) => {
  return input;
});

const chain = someFunc.pipe(runnable1);

Legacy chains

LCEL aims to provide consistency around behavior and customization over legacy subclassed chains such as LLMChain and ConversationalRetrievalChain. Many of these legacy chains hide important details like prompts, and as a wider variety of viable models emerges, customization has become more and more important.

For guides on how to do specific tasks with LCEL, check out the relevant how-to guides.