How to track token usage

Prerequisites

This guide assumes familiarity with the following concepts:

This notebook goes over how to track your token usage for specific LLM calls. This is only implemented by some providers, including OpenAI.

Here's an example of tracking token usage for a single LLM call via a callback:

npm install @langchain/openai @langchain/core
import { OpenAI } from "@langchain/openai";

const llm = new OpenAI({
  model: "gpt-3.5-turbo-instruct",
  callbacks: [
    {
      handleLLMEnd(output) {
        console.log(JSON.stringify(output, null, 2));
      },
    },
  ],
});

await llm.invoke("Tell me a joke.");

/*
  {
    "generations": [
      [
        {
          "text": "\n\nWhy don't scientists trust atoms?\n\nBecause they make up everything.",
          "generationInfo": {
            "finishReason": "stop",
            "logprobs": null
          }
        }
      ]
    ],
    "llmOutput": {
      "tokenUsage": {
        "completionTokens": 14,
        "promptTokens": 5,
        "totalTokens": 19
      }
    }
  }
*/


If this model is passed to a chain or agent that calls it multiple times, it will log an output each time.
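When a chain or agent triggers several calls, you can accumulate the per-call usage into a running total instead of just logging each output. Below is a minimal sketch of such an accumulator, assuming the same `llmOutput.tokenUsage` shape shown in the output above; the `totals` object and `trackUsage` handler are hypothetical names, and the two direct `handleLLMEnd` calls simulate what a chain would emit:

```typescript
// Hypothetical running-total tracker for the tokenUsage shape shown above.
interface TokenUsage {
  completionTokens: number;
  promptTokens: number;
  totalTokens: number;
}

const totals: TokenUsage = { completionTokens: 0, promptTokens: 0, totalTokens: 0 };

// Shared callback handler: add each call's usage into the running totals.
const trackUsage = {
  handleLLMEnd(output: { llmOutput?: { tokenUsage?: TokenUsage } }) {
    const usage = output.llmOutput?.tokenUsage;
    if (usage) {
      totals.completionTokens += usage.completionTokens;
      totals.promptTokens += usage.promptTokens;
      totals.totalTokens += usage.totalTokens;
    }
  },
};

// Simulate two LLM calls, as a chain calling the model twice would:
trackUsage.handleLLMEnd({
  llmOutput: { tokenUsage: { completionTokens: 14, promptTokens: 5, totalTokens: 19 } },
});
trackUsage.handleLLMEnd({
  llmOutput: { tokenUsage: { completionTokens: 8, promptTokens: 6, totalTokens: 14 } },
});

console.log(totals);
// { completionTokens: 22, promptTokens: 11, totalTokens: 33 }
```

Passing an object like `trackUsage` in the model's `callbacks` array (in place of the logging handler above) would give you a session-wide total rather than one log line per call.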

Next steps

You've now seen how to get token usage for supported LLM providers.

Next, check out the other how-to guides in this section, like how to implement your own custom LLM.