AzionRetriever
Overview

This guide will help you get started with the AzionRetriever. For detailed documentation of all AzionRetriever features and configurations, head to the API reference.
Integration details
Retriever | Self-host | Cloud offering | Package | Py support
---|---|---|---|---
AzionRetriever | ❌ | ❌ | @langchain/community | ❌
Setup

To use the AzionRetriever, you need to set the AZION_TOKEN environment variable:
process.env.AZION_TOKEN = "your-api-key";
If you are using OpenAI embeddings for this guide, you'll need to set your OpenAI key as well:
process.env.OPENAI_API_KEY = "YOUR_API_KEY";
If you want automated tracing from individual queries, you can also set your LangSmith API key by uncommenting the lines below:
// process.env.LANGSMITH_API_KEY = "<YOUR API KEY HERE>";
// process.env.LANGSMITH_TRACING = "true";
Installation

This retriever is exported from the @langchain/community/retrievers/azion_edgesql entry point of the @langchain/community package:
- npm
- yarn
- pnpm
npm i azion @langchain/openai @langchain/community
yarn add azion @langchain/openai @langchain/community
pnpm add azion @langchain/openai @langchain/community
Instantiation

Now we can instantiate our retriever:
import { AzionRetriever } from "@langchain/community/retrievers/azion_edgesql";
import { OpenAIEmbeddings } from "@langchain/openai";
import { ChatOpenAI } from "@langchain/openai";
const embeddingModel = new OpenAIEmbeddings({
  model: "text-embedding-3-small",
});

const chatModel = new ChatOpenAI({
  model: "gpt-4o-mini",
  apiKey: process.env.OPENAI_API_KEY,
});
const retriever = new AzionRetriever(embeddingModel, {
  dbName: "langchain", // name of the Edge SQL database to query
  vectorTable: "documents", // table where the vector embeddings are stored
  ftsTable: "documents_fts", // table where the fts index is stored
  searchType: "hybrid", // search type to use for the retriever
  ftsK: 2, // number of results to return from the fts index
  similarityK: 2, // number of results to return from the vector index
  metadataItems: ["language", "topic"], // metadata columns to include in the results
  filters: [{ operator: "=", column: "language", value: "en" }], // metadata filters applied to the search
  entityExtractor: chatModel, // chat model used for entity extraction
});
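
The searchType option controls which indexes are consulted: "hybrid" merges full-text and vector results, while (as the searchtype metadata in the output below suggests) "similarity" uses only the vector index. Here is a minimal similarity-only sketch, reusing the embeddingModel from above; whether ftsTable may be omitted in this mode isn't shown here, so it is kept:

// Similarity-only variant (a sketch; option values are illustrative)
const similarityRetriever = new AzionRetriever(embeddingModel, {
  dbName: "langchain",
  vectorTable: "documents", // table where the vector embeddings are stored
  ftsTable: "documents_fts", // assumption: still declared even though only the vector index is queried
  searchType: "similarity", // query only the vector index
  similarityK: 4, // number of results to return
});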
Usage
const query = "Australia";
await retriever.invoke(query);
[
  Document {
    pageContent: "Australia's indigenous people have inhabited the continent for over 65,000 years",
    metadata: { language: 'en', topic: 'history', searchtype: 'similarity' },
    id: '3'
  },
  Document {
    pageContent: 'Australia is a leader in solar energy adoption and renewable technology',
    metadata: { language: 'en', topic: 'technology', searchtype: 'similarity' },
    id: '5'
  },
  Document {
    pageContent: "Australia's tech sector is rapidly growing with innovation hubs in major cities",
    metadata: { language: 'en', topic: 'technology', searchtype: 'fts' },
    id: '7'
  }
]
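
Each returned Document records which index produced it in metadata.searchtype ('similarity' or 'fts'), which is useful for inspecting how hybrid search blended the two result sets. A small sketch of reading that field:

// Print each result with the index that produced it (sketch)
const docs = await retriever.invoke(query);
for (const doc of docs) {
  console.log(`[${doc.metadata.searchtype}] ${doc.id}: ${doc.pageContent}`);
}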
Use within a chain

Like other retrievers, AzionRetriever can be incorporated into LLM applications via chains.

We will need an LLM or chat model:
Pick your chat model:
- Groq
- OpenAI
- Anthropic
- Google Gemini
- FireworksAI
- MistralAI
- VertexAI
Install dependencies
- npm
- yarn
- pnpm
npm i @langchain/groq
yarn add @langchain/groq
pnpm add @langchain/groq
Add environment variables
GROQ_API_KEY=your-api-key
Instantiate the model
import { ChatGroq } from "@langchain/groq";
const llm = new ChatGroq({
  model: "llama-3.3-70b-versatile",
  temperature: 0
});
Install dependencies
- npm
- yarn
- pnpm
npm i @langchain/openai
yarn add @langchain/openai
pnpm add @langchain/openai
Add environment variables
OPENAI_API_KEY=your-api-key
Instantiate the model
import { ChatOpenAI } from "@langchain/openai";
const llm = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0
});
Install dependencies
- npm
- yarn
- pnpm
npm i @langchain/anthropic
yarn add @langchain/anthropic
pnpm add @langchain/anthropic
Add environment variables
ANTHROPIC_API_KEY=your-api-key
Instantiate the model
import { ChatAnthropic } from "@langchain/anthropic";
const llm = new ChatAnthropic({
  model: "claude-3-5-sonnet-20240620",
  temperature: 0
});
Install dependencies
- npm
- yarn
- pnpm
npm i @langchain/google-genai
yarn add @langchain/google-genai
pnpm add @langchain/google-genai
Add environment variables
GOOGLE_API_KEY=your-api-key
Instantiate the model
import { ChatGoogleGenerativeAI } from "@langchain/google-genai";
const llm = new ChatGoogleGenerativeAI({
  model: "gemini-2.0-flash",
  temperature: 0
});
Install dependencies
- npm
- yarn
- pnpm
npm i @langchain/community
yarn add @langchain/community
pnpm add @langchain/community
Add environment variables
FIREWORKS_API_KEY=your-api-key
Instantiate the model
import { ChatFireworks } from "@langchain/community/chat_models/fireworks";
const llm = new ChatFireworks({
  model: "accounts/fireworks/models/llama-v3p1-70b-instruct",
  temperature: 0
});
Install dependencies
- npm
- yarn
- pnpm
npm i @langchain/mistralai
yarn add @langchain/mistralai
pnpm add @langchain/mistralai
Add environment variables
MISTRAL_API_KEY=your-api-key
Instantiate the model
import { ChatMistralAI } from "@langchain/mistralai";
const llm = new ChatMistralAI({
  model: "mistral-large-latest",
  temperature: 0
});
Install dependencies
- npm
- yarn
- pnpm
npm i @langchain/google-vertexai
yarn add @langchain/google-vertexai
pnpm add @langchain/google-vertexai
Add environment variables
GOOGLE_APPLICATION_CREDENTIALS=credentials.json
Instantiate the model
import { ChatVertexAI } from "@langchain/google-vertexai";
const llm = new ChatVertexAI({
  model: "gemini-1.5-flash",
  temperature: 0
});
import { ChatPromptTemplate } from "@langchain/core/prompts";
import {
  RunnablePassthrough,
  RunnableSequence,
} from "@langchain/core/runnables";
import { StringOutputParser } from "@langchain/core/output_parsers";
import type { Document } from "@langchain/core/documents";
const prompt = ChatPromptTemplate.fromTemplate(`
Answer the question based only on the context provided.
Context: {context}
Question: {question}`);
const formatDocs = (docs: Document[]) => {
  return docs.map((doc) => doc.pageContent).join("\n\n");
};
// See https://langchain.nodejs.cn/docs/tutorials/rag
const ragChain = RunnableSequence.from([
  {
    context: retriever.pipe(formatDocs),
    question: new RunnablePassthrough(),
  },
  prompt,
  llm,
  new StringOutputParser(),
]);
await ragChain.invoke("Paris");
The context mentions that the 2024 Olympics are in Paris.
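
Since ragChain is a standard Runnable, you can also stream the answer token by token instead of awaiting the full string; a minimal sketch:

// Stream the chain's output as it is generated
const stream = await ragChain.stream("Paris");
for await (const chunk of stream) {
  process.stdout.write(chunk);
}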
API reference

For detailed documentation of all AzionRetriever features and configurations, head to the API reference.