# Layerup Security
The Layerup Security integration allows you to secure your calls to any LangChain LLM, LLM chain, or LLM agent. The LLM object wraps around any existing LLM object, providing a secure layer between your users and your LLMs.
While the Layerup Security object is designed as an LLM, it is not actually an LLM itself; it simply wraps an existing LLM, exposing the same functionality as the underlying model.
## Setup
First, you'll need a Layerup Security account from the Layerup website.
Next, create a project via the dashboard and copy your API key. We recommend putting your API key in your project's environment.
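For example, you can load the key from a local `.env` file and fail fast if it is missing. This is a minimal sketch, assuming the `dotenv` package; the variable name `LAYERUP_API_KEY` matches the configuration code below:

```typescript
import "dotenv/config"; // assumes the dotenv package; loads variables from a local .env file

// Fail fast if the key is missing, rather than erroring on the first guarded LLM call.
if (!process.env.LAYERUP_API_KEY) {
  throw new Error("LAYERUP_API_KEY is not set in the environment");
}
```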
Install the Layerup Security SDK:
```bash
# npm
npm install @layerup/layerup-security

# Yarn
yarn add @layerup/layerup-security

# pnpm
pnpm add @layerup/layerup-security
```
And install LangChain Community:
```bash
# npm
npm install @langchain/community @langchain/core

# Yarn
yarn add @langchain/community @langchain/core

# pnpm
pnpm add @langchain/community @langchain/core
```
And now you're ready to start protecting your LLM calls with Layerup Security!
```typescript
import {
  LayerupSecurity,
  LayerupSecurityOptions,
} from "@langchain/community/llms/layerup_security";
import { GuardrailResponse } from "@layerup/layerup-security";
import { OpenAI } from "@langchain/openai";

// Create an instance of your favorite LLM
const openai = new OpenAI({
  modelName: "gpt-3.5-turbo",
  openAIApiKey: process.env.OPENAI_API_KEY,
});

// Configure Layerup Security
const layerupSecurityOptions: LayerupSecurityOptions = {
  // Specify an LLM that Layerup Security will wrap around
  llm: openai,

  // Layerup API key, from the Layerup dashboard
  layerupApiKey: process.env.LAYERUP_API_KEY,

  // Custom base URL, if self hosting
  layerupApiBaseUrl: "https://api.uselayerup.com/v1",

  // List of guardrails to run on prompts before the LLM is invoked
  promptGuardrails: [],

  // List of guardrails to run on responses from the LLM
  responseGuardrails: ["layerup.hallucination"],

  // Whether or not to mask the prompt for PII & sensitive data before it is sent to the LLM
  mask: false,

  // Metadata for abuse tracking, customer tracking, and scope tracking
  metadata: { customer: "example@uselayerup.com" },

  // Handler for guardrail violations on the prompt guardrails
  handlePromptGuardrailViolation: (violation: GuardrailResponse) => {
    if (violation.offending_guardrail === "layerup.sensitive_data") {
      // Custom logic goes here
    }

    return {
      role: "assistant",
      content: `There was sensitive data! I cannot respond. Here's a dynamic canned response. Current date: ${Date.now()}`,
    };
  },

  // Handler for guardrail violations on the response guardrails
  handleResponseGuardrailViolation: (violation: GuardrailResponse) => ({
    role: "assistant",
    content: `Custom canned response with dynamic data! The violation rule was ${violation.offending_guardrail}.`,
  }),
};

const layerupSecurity = new LayerupSecurity(layerupSecurityOptions);
const response = await layerupSecurity.invoke(
  "Summarize this message: my name is Bob Dylan. My SSN is 123-45-6789."
);
```
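Because `LayerupSecurity` behaves like any other LangChain LLM, it can also be composed into a chain, so the configured guardrails run on every call that flows through it. The following is a minimal sketch reusing the `layerupSecurity` instance from above; the prompt template text is illustrative:

```typescript
import { PromptTemplate } from "@langchain/core/prompts";

// Pipe a prompt template into the guarded LLM; the wrapped model is
// invoked like any other LangChain LLM, so the guardrails still apply.
const prompt = PromptTemplate.fromTemplate("Summarize this message: {message}");
const chain = prompt.pipe(layerupSecurity);

const summary = await chain.invoke({
  message: "My name is Bob Dylan. My SSN is 123-45-6789.",
});
```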
API Reference:

- `LayerupSecurity` from `@langchain/community/llms/layerup_security`
- `LayerupSecurityOptions` from `@langchain/community/llms/layerup_security`
- `OpenAI` from `@langchain/openai`
## Related

- LLM conceptual guide
- LLM how-to guides