
Vespa Retriever

This shows how to use Vespa.ai as a LangChain retriever. Vespa.ai is a platform for highly efficient structured text and vector search. Please refer to Vespa.ai for more information.

The following sets up a retriever that fetches results from Vespa's documentation search:

import { VespaRetriever } from "@langchain/community/retrievers/vespa";

export const run = async () => {
  const url = "https://doc-search.vespa.oath.cloud";
  const query_body = {
    yql: "select content from paragraph where userQuery()",
    hits: 5,
    ranking: "documentation",
    locale: "en-us",
  };
  const content_field = "content";

  const retriever = new VespaRetriever({
    url,
    auth: false,
    query_body,
    content_field,
  });

  const result = await retriever.invoke("what is vespa?");
  console.log(result);
};

API Reference: VespaRetriever from @langchain/community/retrievers/vespa

Here, up to 5 results are retrieved from the content field in the paragraph document type, using documentation as the ranking method. The userQuery() is replaced with the actual query passed in from LangChain.
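The query body follows Vespa's Query API, so each of these fields can be tuned independently. A minimal sketch of a variant (the "bm25" rank-profile name below is an assumption and must be defined in your own Vespa application's schema):

```typescript
// Sketch of an alternative query body. The field names (yql, hits, ranking,
// locale) follow the Vespa Query API; the "bm25" rank profile is an
// assumption and must exist in your Vespa application.
const queryBody = {
  yql: "select content from paragraph where userQuery()",
  hits: 10, // fetch up to 10 results instead of 5
  ranking: "bm25",
  locale: "en-us",
};

console.log(queryBody.hits); // 10
```

Only hits and ranking changed here; userQuery() is still the placeholder that LangChain fills in with the user's query.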

Please refer to the pyvespa documentation for more information.

The URL is the endpoint of the Vespa application. You can connect to any Vespa endpoint, either a remote service or a local instance using Docker. However, most Vespa Cloud instances are protected with mTLS. If this is your case, you can, for instance, set up a Cloudflare Worker that contains the necessary credentials to connect to the instance.
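If such a proxy accepts a bearer token, the retriever's auth field can carry it instead of false. A minimal sketch (the token value is a placeholder, and the { bearer } shape is an assumption based on the community retriever's RemoteRetrieverAuth type):

```typescript
// Sketch: passing a bearer token instead of `auth: false`. The token string
// is a placeholder; the { bearer } shape is an assumption based on the
// community package's RemoteRetrieverAuth type.
const auth = { bearer: "<your-token>" };

// The config object the retriever would receive (construction only, no request made):
const config = {
  url: "https://doc-search.vespa.oath.cloud",
  auth,
  query_body: {
    yql: "select content from paragraph where userQuery()",
    hits: 5,
    ranking: "documentation",
    locale: "en-us",
  },
  content_field: "content",
};

console.log(config.auth.bearer);
```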

Now you can return the results and continue using them in LangChain.
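The documents returned by invoke() expose their text on pageContent, so a common next step is joining them into a context string for a prompt. A minimal sketch with stand-in documents (the sample text is hypothetical; in practice result comes from retriever.invoke()):

```typescript
// Stand-in for the Document[] returned by retriever.invoke(); the sample
// text below is hypothetical.
const result: { pageContent: string }[] = [
  { pageContent: "Vespa is a platform for serving and ranking data." },
  { pageContent: "Queries are expressed in YQL." },
];

// Join the retrieved passages into a single context string for a prompt.
const context = result.map((doc) => doc.pageContent).join("\n\n");
console.log(context);
```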
