Llama CPP
Only available on Node.js.
This module is based on the node-llama-cpp Node.js bindings for llama.cpp, allowing you to work with a locally running LLM. This lets you use a much smaller quantized model capable of running on a laptop, which is ideal for testing and sketching out ideas without running up a bill!
Setup
You'll need to install major version 3 of the node-llama-cpp module to communicate with your local model.
- npm: `npm install -S node-llama-cpp@3`
- Yarn: `yarn add node-llama-cpp@3`
- pnpm: `pnpm add node-llama-cpp@3`
You'll also need to install the @langchain/community package, along with @langchain/core:

- npm: `npm install @langchain/community @langchain/core`
- Yarn: `yarn add @langchain/community @langchain/core`
- pnpm: `pnpm add @langchain/community @langchain/core`
You will also need a local Llama 3 model (or a model supported by node-llama-cpp). You will need to pass the path to this model to the LlamaCpp module as a part of the parameters (see example).
Out of the box, node-llama-cpp is tuned for running on macOS, with support for the Metal GPU of Apple's M-series processors. If you need to turn this off, or need support for the CUDA architecture, refer to the node-llama-cpp documentation.
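If you want to confirm which compute backend node-llama-cpp actually selected on your machine, a minimal sketch like the following can help. It assumes node-llama-cpp v3's getLlama helper, whose gpu property reports the active backend:

```typescript
import { getLlama } from "node-llama-cpp";

// Ask node-llama-cpp which compute backend it resolved to on this machine.
const llama = await getLlama();

// Reports e.g. "metal" on Apple Silicon, "cuda" on NVIDIA hardware,
// or false when running CPU-only.
console.log("GPU backend:", llama.gpu);
```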
For advice on getting and preparing llama3, see the documentation for the LLM version of this module.
A note to LangChain.js contributors: if you want to run the tests associated with this module, you will need to put the path to your local model in the environment variable LLAMA_PATH.
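For example, a test or script can pick the model path up from that variable before initializing the embeddings. This is a small sketch; the error handling is up to you:

```typescript
import { LlamaCppEmbeddings } from "@langchain/community/embeddings/llama_cpp";

// The test suite reads the model location from the LLAMA_PATH environment variable.
const llamaPath = process.env.LLAMA_PATH;
if (!llamaPath) {
  throw new Error("Set LLAMA_PATH to the path of your local GGUF model file.");
}

const embeddings = await LlamaCppEmbeddings.initialize({
  modelPath: llamaPath,
});
```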
Usage
Basic use
We need to provide a path to our local Llama3 model. Note that in this module, the embeddings property is always set to true.
```typescript
import { LlamaCppEmbeddings } from "@langchain/community/embeddings/llama_cpp";

const llamaPath = "/Replace/with/path/to/your/model/gguf-llama3-Q4_0.bin";

const embeddings = await LlamaCppEmbeddings.initialize({
  modelPath: llamaPath,
});

// embedQuery returns a Promise, so await the result before logging it.
const res = await embeddings.embedQuery("Hello Llama!");

console.log(res);

/*
  [ 15043, 365, 29880, 3304, 29991 ]
*/
```
API Reference:
- LlamaCppEmbeddings from `@langchain/community/embeddings/llama_cpp`
Document embedding
```typescript
import { LlamaCppEmbeddings } from "@langchain/community/embeddings/llama_cpp";

const llamaPath = "/Replace/with/path/to/your/model/gguf-llama3-Q4_0.bin";

const documents = ["Hello World!", "Bye Bye!"];

const embeddings = await LlamaCppEmbeddings.initialize({
  modelPath: llamaPath,
});

const res = await embeddings.embedDocuments(documents);

console.log(res);

/*
  [ [ 15043, 2787, 29991 ], [ 2648, 29872, 2648, 29872, 29991 ] ]
*/
```
API Reference:
- LlamaCppEmbeddings from `@langchain/community/embeddings/llama_cpp`
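Once you have query and document vectors, you can compare them yourself. The sketch below ranks the example documents against a query with plain cosine similarity; the cosine helper is written inline here and is not part of the llama_cpp module:

```typescript
import { LlamaCppEmbeddings } from "@langchain/community/embeddings/llama_cpp";

const llamaPath = "/Replace/with/path/to/your/model/gguf-llama3-Q4_0.bin";

const embeddings = await LlamaCppEmbeddings.initialize({
  modelPath: llamaPath,
});

// Plain cosine similarity between two vectors of equal length.
const cosine = (a: number[], b: number[]): number => {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i += 1) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
};

const docVectors = await embeddings.embedDocuments(["Hello World!", "Bye Bye!"]);
const queryVector = await embeddings.embedQuery("Greetings!");

// Score each document against the query; higher means more similar.
docVectors.forEach((vec, i) => {
  console.log(`document ${i}: ${cosine(queryVector, vec).toFixed(4)}`);
});
```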
Related
- Embedding model conceptual guide
- Embedding model how-to guides