
Datadog LLM Observability

danger

LLM Observability is in public beta, and its API is subject to change.

With Datadog LLM Observability, you can monitor, troubleshoot, and evaluate your LLM-powered applications, such as chatbots. You can investigate the root cause of issues, monitor operational performance, and evaluate the quality, privacy, and safety of your LLM applications.

This is an experimental community implementation, and it is not officially supported by Datadog. It is based on the Datadog LLM Observability API.

Setup

npm install @langchain/community @langchain/core
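
The tracer also needs credentials at runtime in addition to the packages above. As a minimal sketch, assuming the conventional DD_API_KEY and OPENAI_API_KEY environment variable names (adjust them to however you supply credentials), you can verify they are set before running the example below:

// Minimal pre-flight check. Assumption: the Datadog and OpenAI credentials are
// exposed through the conventional DD_API_KEY and OPENAI_API_KEY environment variables.
for (const name of ["DD_API_KEY", "OPENAI_API_KEY"]) {
  if (!process.env[name]) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
}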

Usage

import { OpenAI } from "@langchain/openai";
import { DatadogLLMObsTracer } from "@langchain/community/experimental/callbacks/handlers/datadog";

/**
 * This example demonstrates how to use the DatadogLLMObsTracer with the OpenAI model.
 * It will produce an "llm" span with the input and output of the model inside the meta field.
 *
 * To run this example, you need to have a valid Datadog API key and OpenAI API key.
 */
export const run = async () => {
  const model = new OpenAI({
    model: "gpt-4",
    temperature: 0.7,
    maxTokens: 1000,
    maxRetries: 5,
  });

  const res = await model.invoke(
    "Question: What would be a good company name for a company that makes colorful socks?\nAnswer:",
    {
      callbacks: [
        new DatadogLLMObsTracer({
          mlApp: "my-ml-app",
        }),
      ],
    }
  );

  console.log({ res });
};
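
If every call made with a model instance should be traced, the tracer can also be attached once through the model's callbacks option at construction time instead of on each invoke call. This uses standard LangChain constructor callbacks; the snippet below is a minimal illustrative sketch reusing the same mlApp value as the example above, not an additional documented API:

import { OpenAI } from "@langchain/openai";
import { DatadogLLMObsTracer } from "@langchain/community/experimental/callbacks/handlers/datadog";

// Attaching the tracer at construction time reports every invocation of this
// model instance to Datadog LLM Observability.
const tracedModel = new OpenAI({
  model: "gpt-4",
  temperature: 0.7,
  callbacks: [new DatadogLLMObsTracer({ mlApp: "my-ml-app" })],
});

const answer = await tracedModel.invoke("Suggest a name for a company that makes colorful socks.");
console.log(answer);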

API Reference: DatadogLLMObsTracer from @langchain/community/experimental/callbacks/handlers/datadog