How-to guides
Here you'll find answers to "How do I…?" types of questions. These guides are goal-oriented and concrete; they're meant to help you complete a specific task. For conceptual explanations, see the Conceptual Guides. For end-to-end walkthroughs, see the Tutorials. For comprehensive descriptions of every class and function, see the API Reference.
Installation
Key features
This highlights functionality that is core to using LangChain.
LangChain Expression Language (LCEL)
LangChain Expression Language is a way to create arbitrary custom chains. It is built on the Runnable protocol.
LCEL cheatsheet: For a quick overview of how to use the main LCEL primitives.
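As a brief illustration, the sketch below composes a prompt template, a chat model, and an output parser with .pipe(); it assumes the @langchain/openai integration package is installed and an OpenAI API key is configured, but any chat model integration would work the same way.

```typescript
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { ChatOpenAI } from "@langchain/openai";
import { StringOutputParser } from "@langchain/core/output_parsers";

// Each piece implements the Runnable protocol, so they compose with .pipe().
const prompt = ChatPromptTemplate.fromTemplate("Tell me a joke about {topic}");
const model = new ChatOpenAI({ model: "gpt-4o-mini" });
const chain = prompt.pipe(model).pipe(new StringOutputParser());

// .invoke() runs the whole chain; .stream() and .batch() work the same way.
const result = await chain.invoke({ topic: "bears" });
console.log(result);
```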
Components
These are the core building blocks you can use when building applications.
Prompt templates
Prompt templates are responsible for formatting user input into a format that can be passed to a language model.
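A minimal sketch (the template text and variables are only illustrative):

```typescript
import { ChatPromptTemplate } from "@langchain/core/prompts";

const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a helpful assistant that translates {input_language} to {output_language}."],
  ["human", "{text}"],
]);

// Formatting fills in the variables and produces messages ready for a chat model.
const messages = await prompt.formatMessages({
  input_language: "English",
  output_language: "French",
  text: "I love programming.",
});
```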
Example selectors
Example selectors are responsible for selecting the correct few-shot examples to pass to the prompt.
Chat models
Chat models are newer forms of language models that take messages in and output a message.
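For example, a rough sketch assuming the @langchain/openai package:

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { HumanMessage, SystemMessage } from "@langchain/core/messages";

const model = new ChatOpenAI({ model: "gpt-4o-mini", temperature: 0 });

// Chat models take a list of messages and return a single AI message.
const response = await model.invoke([
  new SystemMessage("You are a terse assistant."),
  new HumanMessage("What is the capital of France?"),
]);
console.log(response.content);
```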
Messages
Messages are the input and output of chat models. They have some content and a role, which describes the source of the message.
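A small sketch of the core message classes exported from @langchain/core:

```typescript
import { HumanMessage, AIMessage, SystemMessage } from "@langchain/core/messages";

// The class determines the role; `content` holds the actual text (or content blocks).
const messages = [
  new SystemMessage("You are a helpful assistant."),
  new HumanMessage("Hi there!"),
  new AIMessage("Hello! How can I help you today?"),
];

console.log(messages[1].content); // "Hi there!"
```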
LLMs
What LangChain calls LLMs are older forms of language models that take a string in and output a string.
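A minimal sketch, assuming the @langchain/openai package and a completion-style model:

```typescript
import { OpenAI } from "@langchain/openai";

// A string-in / string-out model, as opposed to a chat model.
const llm = new OpenAI({ model: "gpt-3.5-turbo-instruct", temperature: 0 });

const text = await llm.invoke("Write a haiku about the ocean.");
console.log(text); // plain string
```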
Output parsers
Output parsers are responsible for taking the output of an LLM and parsing it into a more structured format.
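For instance, a sketch using StringOutputParser to turn a chat model's message output into a plain string (other parsers, such as JSON parsers, follow the same pattern); it assumes the @langchain/openai package:

```typescript
import { StringOutputParser } from "@langchain/core/output_parsers";
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({ model: "gpt-4o-mini" });
const parser = new StringOutputParser();

// The parser turns the model's AI message output into a plain string.
const chain = model.pipe(parser);
const answer = await chain.invoke("Name three primary colors.");
console.log(typeof answer); // "string"
```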
Document loaders
Document loaders are responsible for loading documents from a variety of sources.
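A brief sketch using the built-in text file loader; the file path is just a placeholder:

```typescript
import { TextLoader } from "langchain/document_loaders/fs/text";

// Loaders return an array of Document objects with pageContent and metadata.
const loader = new TextLoader("./example.txt");
const docs = await loader.load();

console.log(docs[0].pageContent.slice(0, 100));
console.log(docs[0].metadata); // e.g. { source: "./example.txt" }
```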
Text splitters
Text splitters take a document and split it into chunks that can be used for retrieval.
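A sketch assuming the @langchain/textsplitters package:

```typescript
import { RecursiveCharacterTextSplitter } from "@langchain/textsplitters";

const splitter = new RecursiveCharacterTextSplitter({
  chunkSize: 500,   // target characters per chunk
  chunkOverlap: 50, // overlap keeps context across chunk boundaries
});

// createDocuments works on raw strings; splitDocuments works on loaded Documents.
const chunks = await splitter.createDocuments(["...a long piece of text..."]);
console.log(chunks.length);
```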
Embedding models
Embedding models take a piece of text and create a numerical representation of it.
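A minimal sketch assuming the @langchain/openai package; other embedding integrations expose the same embedQuery and embedDocuments methods:

```typescript
import { OpenAIEmbeddings } from "@langchain/openai";

const embeddings = new OpenAIEmbeddings();

// embedQuery returns a single vector; embedDocuments returns one vector per text.
const vector = await embeddings.embedQuery("Hello, world!");
console.log(vector.length); // dimensionality of the embedding
```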
Vector stores
Vector stores are databases that can efficiently store and retrieve embeddings.
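A sketch using the in-memory vector store bundled with the main langchain package, with OpenAI embeddings assumed for illustration:

```typescript
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { OpenAIEmbeddings } from "@langchain/openai";

// Embed and store a few texts in an in-memory vector store.
const vectorStore = await MemoryVectorStore.fromTexts(
  ["LangChain supports many vector stores", "Paris is the capital of France"],
  [{ id: 1 }, { id: 2 }],
  new OpenAIEmbeddings()
);

// Retrieve the most similar documents to a query.
const results = await vectorStore.similaritySearch("What can LangChain store?", 1);
console.log(results[0].pageContent);
```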
Retrievers
Retrievers are responsible for taking a query and returning relevant documents.
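For example, any vector store can be exposed as a retriever; this sketch assumes the same in-memory store and OpenAI embeddings as above:

```typescript
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { OpenAIEmbeddings } from "@langchain/openai";

const vectorStore = await MemoryVectorStore.fromTexts(
  ["LangChain has many retrievers"],
  [{ id: 1 }],
  new OpenAIEmbeddings()
);

// A retriever is a Runnable: it takes a query string and returns Documents.
const retriever = vectorStore.asRetriever({ k: 1 });
const docs = await retriever.invoke("retrievers");
console.log(docs[0].pageContent);
```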
Indexing
Indexing is the process of keeping your vector store in sync with the underlying data source.
Tools
LangChain Tools contain a description of the tool (to pass to the language model) as well as the implementation of the function to call.
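A rough sketch using the tool() helper from a recent @langchain/core release together with a zod schema; the tool name and logic are only illustrative:

```typescript
import { z } from "zod";
import { tool } from "@langchain/core/tools";

// The description and schema are what the model sees; the function is what runs.
const add = tool(
  async ({ a, b }) => String(a + b),
  {
    name: "add",
    description: "Add two numbers together.",
    schema: z.object({ a: z.number(), b: z.number() }),
  }
);

const result = await add.invoke({ a: 2, b: 3 });
console.log(result); // "5"
```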
Agents
Callbacks
Callbacks allow you to hook into the various stages of your LLM application's execution.
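A sketch passing an inline handler object for a single invocation; the handler methods shown are a small subset of the available hooks, and the model is assumed to come from @langchain/openai:

```typescript
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({ model: "gpt-4o-mini" });

// Handlers can be passed per invocation; each method hooks a stage of execution.
const response = await model.invoke("Hello!", {
  callbacks: [
    {
      handleLLMStart: async (_llm, prompts) => {
        console.log("LLM started with prompts:", prompts);
      },
      handleLLMEnd: async (output) => {
        console.log("LLM finished:", output);
      },
    },
  ],
});
```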
Custom
All LangChain components can easily be extended to support your own versions.
Generative UI
Multimodal
Use cases
These guides cover use-case specific details.
Q&A with RAG
Retrieval Augmented Generation (RAG) is a way to connect LLMs to external sources of data. For a high-level tutorial on RAG, check out this guide.
Extraction
Extraction is when you use LLMs to extract structured information from unstructured text. For a high-level tutorial on extraction, check out this guide.
Chatbots
Chatbots involve using an LLM to have a conversation. For a high-level tutorial on building chatbots, check out this guide.
Query analysis
Query analysis is the task of using an LLM to generate a query to send to a retriever. For a high-level tutorial on query analysis, check out this guide.
Q&A over SQL + CSV
You can use LLMs to do question answering over tabular data. For a high-level tutorial, check out this guide.
Q&A over graph databases
You can use an LLM to do question answering over graph databases. For a high-level tutorial, check out this guide.
LangGraph.js
LangGraph.js is an extension of LangChain aimed at building robust and stateful multi-actor applications with LLMs by modeling steps as edges and nodes in a graph.
LangGraph.js documentation is currently hosted on a separate site. You can peruse the LangGraph.js how-to guides here.
LangSmith
LangSmith allows you to closely trace, monitor, and evaluate your LLM application. It seamlessly integrates with LangChain and LangGraph.js, and you can use it to inspect and debug individual steps of your chains as you build.
LangSmith documentation is hosted on a separate site. You can peruse the LangSmith how-to guides here, but we'll highlight a few sections that are particularly relevant to LangChain below:
Evaluation
Evaluating performance is a vital part of building LLM-powered applications. LangSmith helps with every step of the process, from creating a dataset to defining metrics to running evaluators.
To learn more, check out the LangSmith evaluation how-to guides.
Tracing
Tracing gives you observability inside your chains and agents, and is vital in diagnosing issues.
You can see general tracing-related how-tos in this section of the LangSmith docs.