Gathering detailed insights and metrics for @llama-node/llama-cpp
node-llama-cpp
Run AI models locally on your machine with node.js bindings for llama.cpp. Enforce a JSON schema on the model output on the generation level
@llama-node/rwkv-cpp
This repo hosts one of the backends: [rwkv.cpp](https://github.com/saharNooby/rwkv.cpp)
@node-llama-cpp/linux-x64-cuda
Prebuilt binary for node-llama-cpp for Linux x64 with CUDA support
@node-llama-cpp/linux-arm64
Prebuilt binary for node-llama-cpp for Linux arm64
We believe in AI democratization: llama for Node.js, backed by llama-rs, llama.cpp, and rwkv.cpp, running locally on your laptop CPU. Supports llama/alpaca/gpt4all/vicuna/rwkv models.
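The packages listed above are installed from npm; a minimal setup sketch, assuming a standard npm workflow (package names are taken from the listings above, and the prebuilt-binary packages are typically resolved automatically for your platform rather than installed by hand):

```
# Core llama.cpp bindings (this package)
npm install @llama-node/llama-cpp

# Optional: the rwkv.cpp backend
npm install @llama-node/rwkv-cpp
```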
Supply Chain: 73.9
Quality: 66.9
Maintenance: 75.8
Vulnerability: 100
License: 100
Rust (45.49%)
TypeScript (30.97%)
JavaScript (20.66%)
Python (1.43%)
CSS (0.61%)
MDX (0.45%)
HTML (0.26%)
Makefile (0.12%)
C (0.01%)
Total Downloads: 125,068
Last Day: 50
Last Week: 6,054
Last Month: 11,600
Last Year: 91,469
868 Stars
219 Commits
63 Forks
16 Watching
11 Branches
6 Contributors
Latest Version: 0.1.6
Package Id: @llama-node/llama-cpp@0.1.6
Unpacked Size: 9.89 MB
Size: 3.48 MB
File Count: 28
NPM Version: 8.13.2
Node Version: 16.14.2
Published On: 29 May 2023