Gathering detailed insights and metrics for @llama-node/llama-cpp
node-llama-cpp
Run AI models locally on your machine with Node.js bindings for llama.cpp, and enforce a JSON schema on the model's output at the generation level (see the usage sketch after the package list below).
@node-llama-cpp/linux-arm64
Prebuilt binary for node-llama-cpp for Linux arm64
@node-llama-cpp/linux-x64
Prebuilt binary for node-llama-cpp for Linux x64
@node-llama-cpp/linux-armv7l
Prebuilt binary for node-llama-cpp for Linux armv7l
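A minimal sketch of loading a local GGUF model and constraining its output with a JSON schema, based on the node-llama-cpp v3 API; the model path ("models/my-model.gguf") and the example schema are placeholders, and API details may differ between versions.

```typescript
import path from "path";
import {fileURLToPath} from "url";
import {getLlama, LlamaChatSession} from "node-llama-cpp";

const __dirname = path.dirname(fileURLToPath(import.meta.url));

// Load the llama.cpp bindings and a local GGUF model (path is a placeholder)
const llama = await getLlama();
const model = await llama.loadModel({
    modelPath: path.join(__dirname, "models", "my-model.gguf")
});
const context = await model.createContext();
const session = new LlamaChatSession({
    contextSequence: context.getSequence()
});

// Build a grammar from a JSON schema so generation is constrained at the token level
// (the schema below is only an illustrative example)
const grammar = await llama.createGrammarForJsonSchema({
    type: "object",
    properties: {
        answer: {type: "string"}
    }
});

const response = await session.prompt("Name one fact about llamas.", {grammar});
const parsed = grammar.parse(response);
console.log(parsed.answer);
```

Because the grammar is applied during generation rather than after it, the model cannot emit tokens that would break the schema, so the response parses as valid JSON without retries.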
We believe in AI democratization: LLaMA for Node.js, backed by llama-rs, llama.cpp, and rwkv.cpp, running locally on your laptop CPU. Supports LLaMA, Alpaca, GPT4All, Vicuna, and RWKV models.
Cumulative downloads

Period       Downloads   Change vs. previous period
Last Day     104         +38.7%
Last Week    889         +24.7%
Last Month   5,678       -22.2%
Last Year    93,680      +72.7%