Installation
npm install lz4
Developer
pierrec
Developer Guide
Module System: CommonJS, UMD
Min. Node Version: >= 0.10
TypeScript Support: No
Node Version: 14.15.3
NPM Version: 6.14.9
Statistics
438 Stars
139 Commits
98 Forks
24 Watching
4 Branches
18 Contributors
Updated on 09 Sept 2024
Languages
JavaScript (94.18%)
C++ (5.36%)
Python (0.41%)
Shell (0.05%)
Total Downloads
Cumulative downloads: 6,376,120
Last day: 20,050 (-27.2% compared to previous day)
Last week: 125,833 (+8.9% compared to previous week)
Last month: 475,648 (+39.1% compared to previous month)
Last year: 3,290,835 (+114.9% compared to previous year)
LZ4
LZ4 is a very fast compression and decompression algorithm. This Node.js module provides a JavaScript implementation of the decoder as well as native bindings to the LZ4 functions. Node.js Streams are also supported for compression and decompression.
NB. Version 0.2 does not support the legacy format, only the one as of "LZ4 Streaming Format 1.4". Use version 0.1 if required.
Build
With NodeJS:
git clone https://github.com/pierrec/node-lz4.git
cd node-lz4
git submodule update --init --recursive
npm install
Install
With NodeJS:
npm install lz4
Within the browser, using build/lz4.js:

<script type="text/javascript" src="/path/to/lz4.js"></script>
<script type="text/javascript">
// Nodejs-like Buffer built-in
var Buffer = require('buffer').Buffer
var LZ4 = require('lz4')

// Some data to be compressed
var data = 'Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.'
data += data
// LZ4 can only work on Buffers
var input = Buffer.from(data)
// Initialize the output buffer to its maximum length based on the input data
var output = Buffer.alloc( LZ4.encodeBound(input.length) )

// block compression (no archive format)
var compressedSize = LZ4.encodeBlock(input, output)
// remove unnecessary bytes
output = output.slice(0, compressedSize)

console.log( "compressed data", output )

// block decompression (no archive format)
var uncompressed = Buffer.alloc(input.length)
var uncompressedSize = LZ4.decodeBlock(output, uncompressed)
uncompressed = uncompressed.slice(0, uncompressedSize)

console.log( "uncompressed data", uncompressed )
</script>
From a GitHub clone, after making sure that node and node-gyp are properly installed:

npm i
node-gyp rebuild
See below for more LZ4 functions.
Usage
Encoding
There are 2 ways to encode:
- asynchronous, using Node.js Streams: slower, but can handle very large data sets (no memory limitations)
- synchronous, by feeding it the whole data set: faster, but limited by the amount of available memory
Asynchronous encoding
First, create an LZ4 encoding Node.js stream with LZ4#createEncoderStream(options).
- options (Object): LZ4 stream options (optional)
- options.blockMaxSize (Number): chunk size to use (default=4Mb)
- options.highCompression (Boolean): use high compression (default=false)
- options.blockIndependence (Boolean): (default=true)
- options.blockChecksum (Boolean): add compressed blocks checksum (default=false)
- options.streamSize (Boolean): add full LZ4 stream size (default=false)
- options.streamChecksum (Boolean): add full LZ4 stream checksum (default=true)
- options.dict (Boolean): use dictionary (default=false)
- options.dictId (Integer): dictionary id (default=0)
The stream can then encode any data piped to it. It will emit a data event on each encoded chunk, which can be saved into an output stream.
The following example shows how to encode the file test into test.lz4.
var fs = require('fs')
var lz4 = require('lz4')

var encoder = lz4.createEncoderStream()

var input = fs.createReadStream('test')
var output = fs.createWriteStream('test.lz4')

input.pipe(encoder).pipe(output)
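The options listed above are passed when the encoder stream is created. Below is a minimal sketch with illustrative values (smaller blocks, high-compression mode, per-block checksums); the values are examples only, and the accepted block sizes follow the LZ4 frame format.

var fs = require('fs')
var lz4 = require('lz4')

// Illustrative option values only: 256KB blocks, high compression, per-block checksums
var encoder = lz4.createEncoderStream({
  blockMaxSize: 256 * 1024,
  highCompression: true,
  blockChecksum: true
})

fs.createReadStream('test').pipe(encoder).pipe(fs.createWriteStream('test.lz4'))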
Synchronous encoding
Read the data into memory and feed it to LZ4#encode(input[, options]) to produce an LZ4 stream.
- input (Buffer): data to encode
- options (Object): LZ4 stream options (optional)
- options.blockMaxSize (Number): chunk size to use (default=4Mb)
- options.highCompression (Boolean): use high compression (default=false)
- options.blockIndependence (Boolean): (default=true)
- options.blockChecksum (Boolean): add compressed blocks checksum (default=false)
- options.streamSize (Boolean): add full LZ4 stream size (default=false)
- options.streamChecksum (Boolean): add full LZ4 stream checksum (default=true)
- options.dict (Boolean): use dictionary (default=false)
- options.dictId (Integer): dictionary id (default=0)
var fs = require('fs')
var lz4 = require('lz4')

var input = fs.readFileSync('test')
var output = lz4.encode(input)

fs.writeFileSync('test.lz4', output)
Decoding
There are 2 ways to decode:
- asynchronous, using Node.js Streams: slower, but can handle very large data sets (no memory limitations)
- synchronous, by feeding it the whole LZ4 data set: faster, but limited by the amount of available memory
Asynchronous decoding
First, create an LZ4 decoding Node.js stream with LZ4#createDecoderStream().
The stream can then decode any data piped to it. It will emit a data event on each decoded sequence, which can be saved into an output stream.
The following example shows how to decode the LZ4-compressed file test.lz4 into test.
var fs = require('fs')
var lz4 = require('lz4')

var decoder = lz4.createDecoderStream()

var input = fs.createReadStream('test.lz4')
var output = fs.createWriteStream('test')

input.pipe(decoder).pipe(output)
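Because the decoder is a regular Node.js stream, the decoded chunks can also be collected from its data events instead of being piped into a file. A minimal sketch (the variable names are illustrative):

var fs = require('fs')
var lz4 = require('lz4')

var decoder = lz4.createDecoderStream()
var chunks = []

decoder.on('data', function (chunk) { chunks.push(chunk) })
decoder.on('end', function () {
  // All decoded data is now available in memory
  var decoded = Buffer.concat(chunks)
  console.log(decoded.length, 'bytes decoded')
})

fs.createReadStream('test.lz4').pipe(decoder)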
Synchronous decoding
Read the LZ4 data into memory and feed it to LZ4#decode(input) to produce the original uncompressed data.
- input (Buffer): data to decode
var fs = require('fs')
var lz4 = require('lz4')

var input = fs.readFileSync('test.lz4')
var output = lz4.decode(input)

fs.writeFileSync('test', output)
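Since LZ4#encode and LZ4#decode are inverses, a quick in-memory round trip can be used as a sanity check. A minimal sketch (the sample data is arbitrary):

var lz4 = require('lz4')

var original = Buffer.from('some data worth compressing, some data worth compressing')
var compressed = lz4.encode(original)
var restored = lz4.decode(compressed)

// 0 means the decoded buffer is byte-for-byte identical to the original
console.log(Buffer.compare(original, restored) === 0)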
Block level encoding/decoding
In some cases, it is useful to be able to manipulate an LZ4 block instead of an LZ4 stream. The functions to decode and encode are therefore exposed as:
LZ4#decodeBlock(input, output[, startIdx, endIdx])
- returns (Number): >=0: uncompressed size, <0: error at offset
- input (Buffer): data block to decode
- output (Buffer): decoded data block
- startIdx (Number): input buffer start index (optional, default=0)
- endIdx (Number): input buffer end index (optional, default=startIdx + input.length)
LZ4#encodeBound(inputSize)
- returns (Number): maximum size for a compressed block, or 0 if the input is too large
- inputSize (Number): size of the input

This is required to size the buffer that receives the block-encoded data.
LZ4#encodeBlock(input, output[, startIdx, endIdx])
- returns (Number): >0: compressed size, =0: not compressible
- input (Buffer): data block to encode
- output (Buffer): encoded data block
- startIdx (Number): output buffer start index (optional, default=0)
- endIdx (Number): output buffer end index (optional, default=startIdx + output.length)
LZ4#encodeBlockHC(input, output[, compressionLevel])
- returns (Number): >0: compressed size, =0: not compressible
- input (Buffer): data block to encode with high compression
- output (Buffer): encoded data block
- compressionLevel (Number): compression level between 3 and 12 (optional, default=9)
Blocks do not have any magic number and are provided as is. It is useful to store the size of the original input somewhere for decoding. LZ4#encodeBlockHC() is not available in pure JavaScript; it requires the native bindings.
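Putting the block-level functions together, the following sketch compresses a buffer with LZ4#encodeBlock and restores it with LZ4#decodeBlock. As noted above, the caller has to remember the original length, since blocks carry no header; the sample string and variable names are illustrative only.

var lz4 = require('lz4')

var input = Buffer.from('block-level example, block-level example, block-level example')

// Size the destination buffer with the worst-case bound for this input
var output = Buffer.alloc(lz4.encodeBound(input.length))
var compressedSize = lz4.encodeBlock(input, output)
// A size of 0 would mean the data was not compressible
var compressed = output.slice(0, compressedSize)

// Blocks have no header: the original length must be known (or stored) by the caller
var decoded = Buffer.alloc(input.length)
var decodedSize = lz4.decodeBlock(compressed, decoded)

console.log(decoded.slice(0, decodedSize).equals(input)) // true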
How it works
Restrictions / Issues
- blockIndependence property: only true is supported
License
MIT
No vulnerabilities found.
Reason: 0 existing vulnerabilities detected

Reason: license file detected
Details:
- Info: project has a license file: LICENSE:0
- Info: FSF or OSI recognized license: MIT License: LICENSE:0

Reason: binaries present in source code
Details:
- Warn: binary detected: bin/lz4c:1

Reason: Found 20/28 approved changesets -- score normalized to 7

Reason: project is archived
Details:
- Warn: Repository is archived.

Reason: no effort to earn an OpenSSF best practices badge detected

Reason: security policy file not detected
Details:
- Warn: no security policy file detected
- Warn: no security file to analyze

Reason: project is not fuzzed
Details:
- Warn: no fuzzer integrations found

Reason: branch protection not enabled on development/release branches
Details:
- Warn: branch protection not enabled for branch 'master'

Reason: SAST tool is not run on all commits -- score normalized to 0
Details:
- Warn: 0 commits out of 22 are checked with a SAST tool

Score: 3.8/10
Last Scanned on 2024-11-18
The Open Source Security Foundation is a cross-industry collaboration to improve the security of open source software (OSS). The Scorecard provides security health metrics for open source projects.