
Why LangChain?

The goal of the langchain package and LangChain the company is to make it as easy as possible for developers to build applications that reason. While LangChain originally started as a single open source package, it has evolved into a company and a whole ecosystem. This page discusses the LangChain ecosystem as a whole. Most of the components within the LangChain ecosystem can be used on their own - so if you feel particularly drawn to certain components but not others, that is totally fine! Pick and choose whichever components you like best.

Features

There are several primary needs that LangChain aims to address:

  1. Standardized component interfaces: The growing number of models and related components for AI applications has resulted in a wide variety of different APIs that developers need to learn and use. This diversity can make it challenging for developers to switch between providers or combine components when building applications. LangChain exposes a standard interface for key components, making it easy to switch between providers.

  2. Orchestration: As applications become more complex, combining multiple components and models, there's a growing need to efficiently connect these elements into control flows that can accomplish diverse tasks. Orchestration is crucial for building such applications.

  3. Observability and evaluation: As applications become more complex, it becomes increasingly difficult to understand what is happening within them. Furthermore, the pace of development can become rate-limited by the paradox of choice: for example, developers often wonder how to engineer their prompt or which LLM best balances accuracy, latency, and cost. Observability and evaluations can help developers monitor their applications and rapidly answer these types of questions with confidence.

Standardized component interfaces

LangChain provides common interfaces for components that are central to many AI applications. As an example, all chat models implement the BaseChatModel interface. This provides a standard way to interact with chat models, supporting important but often provider-specific features like tool calling and structured outputs.

Example: chat models

Many model providers support tool calling, a critical feature for many applications (e.g., agents) that allows a developer to request model responses matching a particular schema. The APIs for each provider differ, however. LangChain's chat model interface provides a common way to bind tools to a model in order to support tool calling:

// Tool creation
const tools = [myTool];
// Tool binding
const modelWithTools = model.bindTools(tools);
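
To make the pattern concrete, here is a minimal, self-contained sketch of what "binding" tools can look like, assuming a hypothetical SketchChatModel and get_weather tool invented for illustration (this is not LangChain's actual implementation):

```typescript
// Hypothetical sketch of the pattern behind bindTools: the wrapper stores
// tool schemas and attaches them to every request it builds, so calling
// code stays the same regardless of provider.
interface ToolSpec {
  name: string;
  description: string;
}

interface ChatRequest {
  prompt: string;
  tools: ToolSpec[];
}

class SketchChatModel {
  constructor(private boundTools: ToolSpec[] = []) {}

  // Returns a new model instance with the tools attached; the original
  // model is left untouched.
  bindTools(tools: ToolSpec[]): SketchChatModel {
    return new SketchChatModel([...this.boundTools, ...tools]);
  }

  // Builds the provider request; a real model would send this to an API.
  buildRequest(prompt: string): ChatRequest {
    return { prompt, tools: this.boundTools };
  }
}

const myTool: ToolSpec = { name: "get_weather", description: "Look up the weather" };
const modelWithTools = new SketchChatModel().bindTools([myTool]);
const request = modelWithTools.buildRequest("What's the weather in Paris?");
```

Because bindTools returns a new instance rather than mutating the model, the same base model can be reused with different tool sets.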

Similarly, getting models to produce structured outputs is an extremely common use case. Providers support different approaches for this, including JSON mode or tool calling, with different APIs. LangChain's chat model interface provides a common way to produce structured outputs using the withStructuredOutput() method:

// Define schema with Zod
const schema = z.object({ ... });
// Bind schema to model
const modelWithStructure = model.withStructuredOutput(schema);
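
The idea behind this method can be sketched in a few self-contained lines: the model's raw text is parsed and validated against a schema before being returned, so callers always get a typed object. A hand-rolled personSchema validator and a fakeModelCall stub stand in for Zod and a real provider call here; none of these names come from LangChain:

```typescript
// A schema is modeled as a function that validates an unknown value and
// returns a typed result (or throws).
type Schema<T> = (value: unknown) => T;

const personSchema: Schema<{ name: string; age: number }> = (value) => {
  const v = value as { name?: unknown; age?: unknown };
  if (typeof v?.name !== "string" || typeof v?.age !== "number") {
    throw new Error("response does not match schema");
  }
  return { name: v.name, age: v.age };
};

// Stands in for a provider call that was asked for JSON output.
function fakeModelCall(): string {
  return JSON.stringify({ name: "Ada", age: 36 });
}

// Wraps the raw call so callers only ever see schema-validated objects.
function withStructuredOutputSketch<T>(schema: Schema<T>): () => T {
  return () => schema(JSON.parse(fakeModelCall()));
}

const structured = withStructuredOutputSketch(personSchema)();
```

Whether a provider implements this via JSON mode or via tool calling is hidden behind the one method, which is the point of the shared interface.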

Example: retrievers

In the context of RAG and LLM application components, LangChain's retriever interface provides a standard way to connect to many different types of data services or databases (e.g., vector stores or databases). The underlying implementation of the retriever depends on the type of data store or database you are connecting to, but all retrievers implement the runnable interface, meaning they can be invoked in a common manner.

const documents = await myRetriever.invoke("What is the meaning of life?");
[
  Document({
    pageContent: "The meaning of life is 42.",
    metadata: { ... },
  }),
  Document({
    pageContent: "The meaning of life is to use LangChain.",
    metadata: { ... },
  }),
  ...
]
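
The contract this relies on can be sketched without any particular backend: anything that exposes invoke(query) and resolves to a list of documents is a retriever. The Doc interface and KeywordRetriever below are simplified stand-ins invented for illustration, not LangChain types:

```typescript
// Simplified stand-in for LangChain's Document type.
interface Doc {
  pageContent: string;
  metadata: Record<string, unknown>;
}

// The retriever contract: one async method, query in, documents out.
interface Retriever {
  invoke(query: string): Promise<Doc[]>;
}

// An in-memory keyword matcher standing in for a vector store or database;
// only the implementation behind invoke would change for a real backend.
class KeywordRetriever implements Retriever {
  constructor(private docs: Doc[]) {}

  async invoke(query: string): Promise<Doc[]> {
    const terms = query.toLowerCase().split(/\s+/);
    return this.docs.filter((d) =>
      terms.some((t) => d.pageContent.toLowerCase().includes(t))
    );
  }
}

const myRetriever: Retriever = new KeywordRetriever([
  { pageContent: "The meaning of life is 42.", metadata: {} },
  { pageContent: "LangChain standardizes component interfaces.", metadata: {} },
]);
```

With this in place, `await myRetriever.invoke("meaning of life")` resolves to the matching documents, and swapping the backing store never changes the calling code.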

Orchestration

While standardization for individual components is useful, we've increasingly seen that developers want to combine components into more complex applications. This motivates the need for orchestration. There are several common characteristics of LLM applications that this orchestration layer should support:

  • Complex control flow: The application requires complex patterns such as cycles (e.g., a loop that reiterates until a condition is met).

  • Persistence: The application needs to maintain short-term and/or long-term memory.

  • Human-in-the-loop: The application needs human interaction, e.g., pausing, reviewing, editing, or approving certain steps.

The recommended way to do orchestration for these complex applications is LangGraph. LangGraph is a library that gives developers a high degree of control by expressing the flow of the application as a set of nodes and edges. LangGraph comes with built-in support for persistence, human-in-the-loop, memory, and other features. It's particularly well suited for building agents or multi-agent applications. Importantly, individual LangChain components can be used within LangGraph nodes, but you can also use LangGraph without using LangChain components.
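
The node-and-edge model can be illustrated with a toy, dependency-free sketch (this is invented for illustration and does not use the LangGraph API): each node transforms a shared state, and a routing function plays the role of conditional edges, allowing a loop to run until a condition is met:

```typescript
// Shared state threaded through the graph.
type State = { count: number; done: boolean };
type GraphNode = (state: State) => State;

const nodes: Record<string, GraphNode> = {
  // Increments the counter; stands in for a model or tool call.
  work: (s) => ({ ...s, count: s.count + 1 }),
  // Checks the loop condition, mirroring a conditional edge's decision.
  check: (s) => ({ ...s, done: s.count >= 3 }),
};

// Edge routing: "check" loops back to "work" until done, then ends.
function nextNode(current: string, state: State): string {
  if (current === "work") return "check";
  if (current === "check") return state.done ? "END" : "work";
  throw new Error(`unknown node: ${current}`);
}

// Executes nodes and follows edges until the END sentinel is reached.
function runGraph(start: string, initial: State): State {
  let node = start;
  let state = initial;
  while (node !== "END") {
    state = nodes[node](state);
    node = nextNode(node, state);
  }
  return state;
}

const finalState = runGraph("work", { count: 0, done: false });
```

Persistence and human-in-the-loop then amount to checkpointing this state between steps and pausing the loop at chosen nodes, which is what LangGraph provides out of the box.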

[Further reading]

Have a look at our free course, Introduction to LangGraph, to learn more about how to use LangGraph to build complex applications.

Observability and evaluation

The pace of AI application development is often rate-limited by high-quality evaluations because there is a paradox of choice. Developers often wonder how to engineer their prompt or which LLM best balances accuracy, latency, and cost. High-quality tracing and evaluations can help you rapidly answer these types of questions with confidence. LangSmith is our platform that supports observability and evaluation for AI applications. See our conceptual guides on evaluations and tracing for more details.

[Further reading]

See our video playlist on LangSmith tracing and evaluations for more details.

Conclusion

LangChain offers standard interfaces for components that are central to many AI applications, which provides a few specific advantages:

  • Ease of swapping providers: It allows you to swap out different component providers without having to change the underlying code.

  • Advanced features: It provides common methods for more advanced features, such as streaming and tool calling.

LangGraph makes it possible to orchestrate complex applications (e.g., agents) and provides features like persistence, human-in-the-loop, and memory.

LangSmith makes it possible to iterate with confidence on your applications, by providing LLM-specific observability and a framework for testing and evaluating your applications.