Gathering detailed insights and metrics for node-llama-cpp
@llama-node/llama-cpp
This package provides bindings for one of the supported backends: [llama.cpp](https://github.com/ggerganov/llama.cpp)
@llama-node/rwkv-cpp
This package provides bindings for one of the supported backends: [rwkv.cpp](https://github.com/saharNooby/rwkv.cpp)
@node-llama-cpp/linux-x64-cuda
Prebuilt binary for node-llama-cpp for Linux x64 with CUDA support
@node-llama-cpp/linux-arm64
Prebuilt binary for node-llama-cpp for Linux arm64
Run AI models locally on your machine with Node.js bindings for llama.cpp. Enforce a JSON schema on the model's output at the generation level.
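The schema-enforcement feature mentioned above can be sketched as follows, based on the package's documented v3 API (`getLlama`, `createGrammarForJsonSchema`, `LlamaChatSession`); the model path is a placeholder assumption, and any local GGUF file would work in its place:

```typescript
import {getLlama, LlamaChatSession} from "node-llama-cpp";

const llama = await getLlama();

// Placeholder path — substitute any GGUF model you have downloaded locally.
const model = await llama.loadModel({modelPath: "models/example.Q4_K_M.gguf"});

// Build a grammar from a JSON schema; generation is then constrained
// token-by-token so the output always conforms to this shape.
const grammar = await llama.createGrammarForJsonSchema({
    type: "object",
    properties: {
        name: {type: "string"},
        age: {type: "number"}
    }
});

const context = await model.createContext();
const session = new LlamaChatSession({contextSequence: context.getSequence()});

const answer = await session.prompt("Describe a fictional person.", {grammar});
const parsed = grammar.parse(answer); // typed object matching the schema
console.log(parsed.name, parsed.age);
```

Because the constraint is applied during sampling rather than by post-hoc validation, the model cannot emit tokens that would break the schema.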
TypeScript
Module System
Min. Node Version
Node Version
NPM Version
- Supply Chain: 58.3
- Quality: 94.9
- Maintenance: 90.4
- Vulnerability: 100
- License: 98.6
Updated on 06 Dec 2024
- TypeScript (90.53%)
- C++ (4.74%)
- CSS (1.98%)
- Vue (1.2%)
- JavaScript (0.99%)
- Shell (0.33%)
- CMake (0.21%)
- HTML (0.02%)
- C (0.01%)