Installation
npm install @ifconfigla/fflate

Developer Guide
TypeScript: Yes
Module System: CommonJS, ESM, UMD
Languages: TypeScript (100%)
Download Statistics
| Period | Downloads | Change vs previous period |
|---|---|---|
| Last Day | 2 | +100% |
| Last Week | 4 | +33.3% |
| Last Month | 14 | +366.7% |
| Last Year | 111 | -56.6% |
| Total | 367 | |
GitHub Statistics
2,370 Stars
127 Commits
85 Forks
18 Watching
2 Branches
8 Contributors
Bundle Size
Minified: 31.68 kB
Minified + Gzipped: 11.63 kB
Package Meta Information
Latest Version: 0.8.1
Package Id: @ifconfigla/fflate@0.8.1
Unpacked Size: 655.18 kB
Size: 162.93 kB
File Count: 26
Published On: 14 Aug 2023
fflate
High performance (de)compression in an 8kB package
Forked to create deterministic archive files from NZIP
Why fflate?

`fflate` (short for fast flate) is the fastest, smallest, and most versatile pure JavaScript compression and decompression library in existence, handily beating `pako`, `tiny-inflate`, and `UZIP.js` in performance benchmarks while being multiple times more lightweight. Its compression ratios are often better than even the original Zlib C library. It includes support for DEFLATE, GZIP, and Zlib data. Data compressed by `fflate` can be decompressed by other tools, and vice versa.

In addition to the base decompression and compression APIs, `fflate` supports high-speed ZIP file archiving for an extra 3 kB. In fact, the compressor, in synchronous mode, compresses both more quickly and with a higher compression ratio than most compression software (even Info-ZIP, a C program), and in asynchronous mode it can utilize multiple threads to achieve over 3x the performance of virtually any other utility.
| | pako | tiny-inflate | UZIP.js | fflate |
|---|---|---|---|---|
| Decompression performance | 1x | Up to 40% slower | Up to 40% faster | Up to 40% faster |
| Compression performance | 1x | N/A | Up to 25% faster | Up to 50% faster |
| Base bundle size (minified) | 45.6kB | 3kB (inflate only) | 14.2kB | 8kB (3kB for inflate only) |
| Decompression support | ✅ | ✅ | ✅ | ✅ |
| Compression support | ✅ | ❌ | ✅ | ✅ |
| ZIP support | ❌ | ❌ | ✅ | ✅ |
| Streaming support | ✅ | ❌ | ❌ | ✅ |
| GZIP support | ✅ | ❌ | ❌ | ✅ |
| Supports files up to 4GB | ✅ | ❌ | ❌ | ✅ |
| Doesn't hang on error | ✅ | ❌ | ❌ | ✅ |
| Dictionary support | ✅ | ❌ | ❌ | ✅ |
| Multi-thread/Asynchronous | ❌ | ❌ | ❌ | ✅ |
| Streaming ZIP support | ❌ | ❌ | ❌ | ✅ |
| Uses ES Modules | ❌ | ❌ | ❌ | ✅ |
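To make the synchronous/asynchronous distinction above concrete, here is a minimal sketch using fflate's `zipSync` and `zip` exports; `someFileBuffer` is a placeholder `Uint8Array`, not part of the library:

```js
import { zipSync, zip } from 'fflate';

// Synchronous: blocks the calling thread until the whole archive is built
const archive = zipSync({ 'readme.txt': someFileBuffer });

// Asynchronous: compresses entries on worker threads and returns
// immediately; the node-style callback receives the finished archive
zip({ 'readme.txt': someFileBuffer }, (err, data) => {
  if (!err) console.log('zipped', data.length, 'bytes');
});
```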
Demo

If you'd like to try `fflate` for yourself without installing it, you can take a look at the browser demo. Since `fflate` is a pure JavaScript library, it works in both the browser and Node.js (see Browser support for more info).
Usage

Install `fflate`:

```sh
npm i fflate # or yarn add fflate, or pnpm add fflate
```
Import:

```js
// I will assume that you use the following for the rest of this guide
import * as fflate from 'fflate';

// However, you should import ONLY what you need to minimize bloat.
// So, if you just need GZIP compression support:
import { gzipSync } from 'fflate';
// Woo! You just saved 20 kB off your bundle with one line.
```
If your environment doesn't support ES Modules (e.g. Node.js):

```js
// Try to avoid this when using fflate in the browser, as it will import
// all of fflate's components, even those that you aren't using.
const fflate = require('fflate');
```
If you want to load from a CDN in the browser:

```html
<!--
You should use either UNPKG or jsDelivr (i.e. only one of the following)

Note that tree shaking is completely unsupported from the CDN. If you want
a small build without build tools, please ask me and I will make one manually
with only the features you need. This build is about 31kB, or 11.5kB gzipped.
-->
<script src="https://unpkg.com/fflate@0.8.0"></script>
<script src="https://cdn.jsdelivr.net/npm/fflate@0.8.0/umd/index.js"></script>
<!-- Now, the global variable fflate contains the library -->

<!-- If you're going buildless but want ESM, import from Skypack -->
<script type="module">
  import * as fflate from 'https://cdn.skypack.dev/fflate@0.8.0?min';
</script>
```
If you are using Deno:

```js
// Don't use the ?dts Skypack flag; it isn't necessary for Deno support
// The @deno-types comment adds TypeScript typings

// @deno-types="https://cdn.skypack.dev/fflate@0.8.0/lib/index.d.ts"
import * as fflate from 'https://cdn.skypack.dev/fflate@0.8.0?min';
```
If your environment doesn't support bundling:

```js
// Again, try to import just what you need

// For the browser:
import * as fflate from 'fflate/esm/browser.js';
// If the standard ESM import fails on Node (i.e. older version):
import * as fflate from 'fflate/esm';
```
And use:

```js
// This is an ArrayBuffer of data
const massiveFileBuf = await fetch('/aMassiveFile').then(
  res => res.arrayBuffer()
);
// To use fflate, you need a Uint8Array
const massiveFile = new Uint8Array(massiveFileBuf);
// Note that Node.js Buffers work just fine as well:
// const massiveFile = require('fs').readFileSync('aMassiveFile.txt');

// Higher level means lower performance but better compression
// The level ranges from 0 (no compression) to 9 (max compression)
// The default level is 6
const notSoMassive = fflate.zlibSync(massiveFile, { level: 9 });
const massiveAgain = fflate.unzlibSync(notSoMassive);
const gzipped = fflate.gzipSync(massiveFile, {
  // GZIP-specific: the filename to use when decompressed
  filename: 'aMassiveFile.txt',
  // GZIP-specific: the modification time. Can be a Date, date string,
  // or Unix timestamp
  mtime: '9/1/16 2:00 PM'
});
```
`fflate` can autodetect a compressed file's format as well:

```js
const compressed = new Uint8Array(
  await fetch('/GZIPorZLIBorDEFLATE').then(res => res.arrayBuffer())
);
// Above example with Node.js Buffers:
// Buffer.from('H4sIAAAAAAAAE8tIzcnJBwCGphA2BQAAAA==', 'base64');

const decompressed = fflate.decompressSync(compressed);
```
Using strings is easy with `fflate`'s string conversion API:

```js
const buf = fflate.strToU8('Hello world!');

// The default compression method is gzip
// Increasing mem may increase performance at the cost of memory
// The mem ranges from 0 to 12, where 4 is the default
const compressed = fflate.compressSync(buf, { level: 6, mem: 8 });

// When you need to decompress:
const decompressed = fflate.decompressSync(compressed);
const origText = fflate.strFromU8(decompressed);
console.log(origText); // Hello world!
```
If you need to use an (albeit inefficient) binary string, you can set the second argument to `true`.

```js
const buf = fflate.strToU8('Hello world!');

// The second argument, latin1, is a boolean that indicates that the data
// is not Unicode but rather should be encoded and decoded as Latin-1.
// This is useful for creating a string from binary data that isn't
// necessarily valid UTF-8. However, binary strings are incredibly
// inefficient and tend to double file size, so they're not recommended.
const compressedString = fflate.strFromU8(
  fflate.compressSync(buf),
  true
);
const decompressed = fflate.decompressSync(
  fflate.strToU8(compressedString, true)
);
const origText = fflate.strFromU8(decompressed);
console.log(origText); // Hello world!
```
You can use streams as well to incrementally add data to be compressed or decompressed:
```js
// This example uses synchronous streams, but for the best experience
// you'll definitely want to use asynchronous streams.

let outStr = '';
const gzipStream = new fflate.Gzip({ level: 9 }, (chunk, isLast) => {
  // accumulate in an inefficient binary string (just an example)
  outStr += fflate.strFromU8(chunk, true);
});

// You can also attach the data handler separately if you don't want to
// do so in the constructor.
gzipStream.ondata = (chunk, final) => { ... };

// Since this is synchronous, all errors will be thrown by stream.push()
gzipStream.push(chunk1);
gzipStream.push(chunk2);

...

// You should mark the last chunk by using true in the second argument
// In addition to being necessary for the stream to work properly, this
// will also set the isLast parameter in the handler to true.
gzipStream.push(lastChunk, true);

console.log(outStr); // The compressed binary string is now available

// The options parameter for compression streams is optional; you can
// provide one parameter (the handler) or none at all if you set
// deflateStream.ondata later.
const deflateStream = new fflate.Deflate((chunk, final) => {
  console.log(chunk, final);
});

// If you want to create a stream from strings, use EncodeUTF8
const utfEncode = new fflate.EncodeUTF8((data, final) => {
  // Chaining streams together is done by pushing to the
  // next stream in the handler for the previous stream
  deflateStream.push(data, final);
});

utfEncode.push('Hello'.repeat(1000));
utfEncode.push(' '.repeat(100));
utfEncode.push('world!'.repeat(10), true);

// The deflateStream has logged the compressed data

const inflateStream = new fflate.Inflate();
inflateStream.ondata = (decompressedChunk, final) => { ... };

let stringData = '';

// Streaming UTF-8 decode is available too
const utfDecode = new fflate.DecodeUTF8((data, final) => {
  stringData += data;
});

// Decompress streams auto-detect the compression method, as the
// non-streaming decompress() method does.
const dcmpStrm = new fflate.Decompress((chunk, final) => {
  console.log(chunk, 'was encoded with GZIP, Zlib, or DEFLATE');
  utfDecode.push(chunk, final);
});

dcmpStrm.push(zlibJSONData1);
dcmpStrm.push(zlibJSONData2, true);

// This succeeds; the UTF-8 decoder chained with the unknown compression format
// stream to reach a string as a sink.
console.log(JSON.parse(stringData));
```
You can create multi-file ZIP archives easily as well. Note that by default, compression is enabled for all files, which is not useful when ZIPping many PNGs, JPEGs, PDFs, etc. because those formats are already compressed. You should either override the level on a per-file basis or globally to avoid wasting resources.
```js
// Note that the asynchronous version (see below) runs in parallel and
// is *much* (up to 3x) faster for larger archives.
const zipped = fflate.zipSync({
  // Directories can be nested structures, as in an actual filesystem
  'dir1': {
    'nested': {
      // You can use Unicode in filenames
      '你好.txt': fflate.strToU8('Hey there!')
    },
    // You can also manually write out a directory path
    'other/tmp.txt': new Uint8Array([97, 98, 99, 100])
  },

  // You can also provide compression options
  'massiveImage.bmp': [aMassiveFile, {
    level: 9,
    mem: 12
  }],
  // PNG is pre-compressed; no need to waste time
  'superTinyFile.png': [aPNGFile, { level: 0 }],

  // Directories take options too
  'exec': [{
    'hello.sh': [fflate.strToU8('echo hello world'), {
      // ZIP only: Set the operating system to Unix
      os: 3,
      // ZIP only: Make this file executable on Unix
      attrs: 0o755 << 16
    }]
  }, {
    // ZIP and GZIP support mtime (defaults to current time)
    mtime: new Date('10/20/2020')
  }]
}, {
  // These options are the defaults for all files, but file-specific
  // options take precedence.
  level: 1,
  // Obfuscate last modified time by default
  mtime: new Date('1/1/1980')
});

// If you write the zipped data to myzip.zip and unzip, the folder
// structure will be outputted as:

// myzip.zip (original file)
// dir1
// |-> nested
// |   |-> 你好.txt
// |-> other
// |   |-> tmp.txt
// massiveImage.bmp
// superTinyFile.png

// When decompressing, folders are not nested; all filepaths are fully
// written out in the keys. For example, the return value may be:
// { 'nested/directory/structure.txt': Uint8Array(2) [97, 97] }
const decompressed = fflate.unzipSync(zipped, {
  // You may optionally supply a filter for files. By default, all files in a
  // ZIP archive are extracted, but a filter can save resources by telling
  // the library not to decompress certain files
  filter(file) {
    // Don't decompress the massive image or any files larger than 10 MiB
    return file.name != 'massiveImage.bmp' && file.originalSize <= 10_000_000;
  }
});
```
If you need extremely high performance or custom ZIP compression formats, you can use the highly extensible ZIP streams. They take streams as both input and output. You can even use custom compression/decompression algorithms from other libraries, as long as they are defined in the ZIP spec (see section 4.4.5). If you'd like more info on using custom compressors, feel free to ask.
```js
// ZIP object
// Can also specify zip.ondata outside of the constructor
const zip = new fflate.Zip((err, dat, final) => {
  if (!err) {
    // output of the streams
    console.log(dat, final);
  }
});

const helloTxt = new fflate.ZipDeflate('hello.txt', {
  level: 9
});

// Always add streams to ZIP archives before pushing to those streams
zip.add(helloTxt);

helloTxt.push(chunk1);
// Last chunk
helloTxt.push(chunk2, true);

// ZipPassThrough is like ZipDeflate with level 0, but allows for tree shaking
const nonStreamingFile = new fflate.ZipPassThrough('test.png');
zip.add(nonStreamingFile);
// If you have data already loaded, just .push(data, true)
nonStreamingFile.push(pngData, true);

// You need to call .end() after finishing
// This ensures the ZIP is valid
zip.end();

// Unzip object
const unzipper = new fflate.Unzip();

// This function will almost always have to be called. It is used to support
// compression algorithms such as BZIP2 or LZMA in ZIP files if just DEFLATE
// is not enough (though it almost always is).
// If your ZIP files are not compressed, this line is not needed.
unzipper.register(fflate.UnzipInflate);

const neededFiles = ['file1.txt', 'example.json'];

// Can specify handler in constructor too
unzipper.onfile = file => {
  // file.name is a string, file is a stream
  if (neededFiles.includes(file.name)) {
    file.ondata = (err, dat, final) => {
      // Stream output here
      console.log(dat, final);
    };

    console.log('Reading:', file.name);

    // File sizes are sometimes not set if the ZIP file did not encode
    // them, so you may want to check that file.size != undefined
    console.log('Compressed size', file.size);
    console.log('Decompressed size', file.originalSize);

    // You should only start the stream if you plan to use it to improve
    // performance. Only after starting the stream will ondata be called.
    // This method will throw if the compression method hasn't been registered
    file.start();
  }
};

// Try to keep under 5,000 files per chunk to avoid stack limit errors
// For example, if all files are a few kB, multi-megabyte chunks are OK
// If files are mostly under 100 bytes, 64kB chunks are the limit
unzipper.push(zipChunk1);
unzipper.push(zipChunk2);
unzipper.push(zipChunk3, true);
```
As you may have guessed, there is an asynchronous version of every method as well. Unlike most libraries, this will cause the compression or decompression to run in a separate thread entirely and automatically by using Web (or Node) Workers (as of now, Deno is unsupported). This means that the processing will not block the main thread at all.

Note that there is a significant initial overhead to using workers of about 70ms for each asynchronous function. For instance, if you call `unzip` ten times, the overhead only applies for the first call, but if you call `unzip` and `zlib`, they will each cause the 70ms delay. Therefore, it's best to avoid the asynchronous API unless necessary. However, if you're compressing multiple large files at once, or the synchronous API causes the main thread to hang for too long, the callback APIs are an order of magnitude better.
```js
import {
  gzip, zlib, AsyncGzip, zip, unzip, strToU8, strFromU8,
  Zip, AsyncZipDeflate, Unzip, AsyncUnzipInflate
} from 'fflate';

// Workers will work in almost any browser (even IE11!)
// However, they fail below Node v12 without the --experimental-worker
// CLI flag, and will fail entirely on Node below v10.

// All of the async APIs use a node-style callback as so:
const terminate = gzip(aMassiveFile, (err, data) => {
  if (err) {
    // The compressed data was likely corrupt, so we have to handle
    // the error.
    return;
  }
  // Use data however you like
  console.log(data.length);
});

if (needToCancel) {
  // The return value of any of the asynchronous APIs is a function that,
  // when called, will immediately cancel the operation. The callback
  // will not be called.
  terminate();
}

// If you wish to provide options, use the second argument.

// The consume option will render the data inside aMassiveFile unusable,
// but can improve performance and dramatically reduce memory usage.
zlib(aMassiveFile, { consume: true, level: 9 }, (err, data) => {
  // Use the data
});

// Asynchronous streams are similar to synchronous streams, but the
// handler has the error that occurred (if any) as the first parameter,
// and they don't block the main thread.

// Additionally, any buffers that are pushed in will be consumed and
// rendered unusable; if you need to use a buffer you push in, you
// should clone it first.
const gzs = new AsyncGzip({ level: 9, mem: 12, filename: 'hello.txt' });
let wasCallbackCalled = false;
gzs.ondata = (err, chunk, final) => {
  // Note the new err parameter
  if (err) {
    // Note that after this occurs, the stream becomes corrupt and must
    // be discarded. You can't continue pushing chunks and expect it to
    // work.
    console.error(err);
    return;
  }
  wasCallbackCalled = true;
};
gzs.push(chunk);

// Since the stream is asynchronous, the callback will not be called
// immediately. If such behavior is absolutely necessary (it shouldn't
// be), use synchronous streams.
console.log(wasCallbackCalled); // false

// To terminate an asynchronous stream's internal worker, call
// stream.terminate().
gzs.terminate();

// This is way faster than zipSync because the compression of multiple
// files runs in parallel. In fact, the fact that it's parallelized
// makes it faster than most standalone ZIP CLIs. The effect is most
// significant for multiple large files; less so for many small ones.
zip({ f1: aMassiveFile, 'f2.txt': anotherMassiveFile }, {
  // The options object is still optional, you can still do just
  // zip(archive, callback)
  level: 6
}, (err, data) => {
  // Save the ZIP file
});

// unzip is the only async function without support for the consume option
// It is parallelized, so unzip is also often much faster than unzipSync
unzip(aMassiveZIPFile, (err, unzipped) => {
  // If the archive has data.xml, log it here
  console.log(unzipped['data.xml']);
  // Conversion to string
  console.log(strFromU8(unzipped['data.xml']));
});

// Streaming ZIP archives can accept asynchronous streams. This automatically
// uses multicore compression.
const zipStream = new Zip();
zipStream.ondata = (err, chunk, final) => { ... };
// The JSON and BMP are compressed in parallel
const exampleFile = new AsyncZipDeflate('example.json');
zipStream.add(exampleFile);
exampleFile.push(strToU8(JSON.stringify({ large: 'object' })), true);
const exampleFile2 = new AsyncZipDeflate('example2.bmp', { level: 9 });
zipStream.add(exampleFile2);
exampleFile2.push(ec2a);
exampleFile2.push(ec2b);
exampleFile2.push(ec2c);
...
exampleFile2.push(ec2Final, true);
zipStream.end();

// Streaming Unzip should register the asynchronous inflation algorithm
// for parallel processing.
const unzipStream = new Unzip(stream => {
  if (stream.name.endsWith('.json')) {
    stream.ondata = (err, chunk, final) => { ... };
    stream.start();

    if (needToCancel) {
      // To cancel these streams, call .terminate()
      stream.terminate();
    }
  }
});
unzipStream.register(AsyncUnzipInflate);
unzipStream.push(data, true);
```
See the documentation for more detailed information about the API.
Bundle size estimates

The bundle size measurements for `fflate` on sites like Bundlephobia include every feature of the library and should be seen as an upper bound. As long as you are using tree shaking or dead code elimination, this table should give you a general idea of `fflate`'s bundle size for the features you need.

The maximum bundle size that is possible with `fflate` is about 31kB (11.5kB gzipped) if you use every single feature, but feature parity with `pako` is only around 10kB (as opposed to 45kB from `pako`). If your bundle size increases dramatically after adding `fflate`, please create an issue.
| Feature | Bundle size (minified) | Nearest competitor |
|---|---|---|
| Decompression | 3kB | tiny-inflate |
| Compression | 5kB | UZIP.js, 2.84x larger |
| Async decompression | 4kB (1kB + raw decompression) | N/A |
| Async compression | 6kB (1kB + raw compression) | N/A |
| ZIP decompression | 5kB (2kB + raw decompression) | UZIP.js, 2.84x larger |
| ZIP compression | 7kB (2kB + raw compression) | UZIP.js, 2.03x larger |
| GZIP/Zlib decompression | 4kB (1kB + raw decompression) | pako, 11.4x larger |
| GZIP/Zlib compression | 5kB (1kB + raw compression) | pako, 9.12x larger |
| Streaming decompression | 4kB (1kB + raw decompression) | pako, 11.4x larger |
| Streaming compression | 5kB (1kB + raw compression) | pako, 9.12x larger |
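As an illustration of the table, here is a minimal sketch of a GZIP-only build (assuming your bundler performs dead code elimination; all four named imports are real fflate exports):

```js
// Importing only the GZIP helpers lets a tree-shaking bundler drop the
// ZIP, Zlib, and streaming code, so the result should land near the
// GZIP rows above rather than the full ~31 kB build.
import { gzipSync, gunzipSync, strToU8, strFromU8 } from 'fflate';

const compressed = gzipSync(strToU8('bundle size demo'), { level: 6 });
console.log(strFromU8(gunzipSync(compressed))); // bundle size demo
```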
What makes fflate so fast?

Many JavaScript compression/decompression libraries exist. However, the most popular one, `pako`, is merely a clone of Zlib rewritten nearly line-for-line in JavaScript. Although it is by no means poorly made, `pako` doesn't recognize the many differences between JavaScript and C, and therefore is suboptimal for performance. Moreover, even when minified, the library is 45 kB; it may not seem like much, but for anyone concerned with optimizing bundle size (especially library authors), it's more weight than necessary.
Note that there exist some small libraries like `tiny-inflate` for solely decompression, and with a minified size of 3 kB, it can be appealing; however, its performance is lackluster, typically 40% worse than `pako` in my tests.
`UZIP.js` is both faster (by up to 40%) and smaller (14 kB minified) than `pako`, and it contains a variety of innovations that make it excellent for both performance and compression ratio. However, the developer made a variety of tiny mistakes and inefficient design choices that make it imperfect. Moreover, it does not support GZIP or Zlib data directly; one must remove the headers manually to use `UZIP.js`.
So what makes `fflate` different? It takes the brilliant innovations of `UZIP.js` and optimizes them while adding direct support for GZIP and Zlib data. And unlike all of the above libraries, it uses ES Modules to allow for partial builds through tree shaking, meaning that it can rival even `tiny-inflate` in size while maintaining excellent performance. The end result is a library that, in total, weighs 8kB minified for the core build (3kB for decompression only and 5kB for compression only), is about 15% faster than `UZIP.js` or up to 60% faster than `pako`, and achieves the same or better compression ratio than the rest.
If you're willing to have 160 kB of extra weight and much less browser support, you could theoretically achieve more performance than `fflate` with a WASM build of Zlib like `wasm-flate`. However, per some tests I conducted, the WASM interpreters of major browsers are not fast enough as of December 2020 for `wasm-flate` to be useful: `fflate` is around 2x faster.
Before you decide that `fflate` is the end-all compression library, you should note that JavaScript simply cannot rival the performance of a native program. If you're only using Node.js, it's probably better to use the native Zlib bindings, which tend to offer the best performance. Though note that even against Zlib, `fflate` is only around 30% slower in decompression and 10% slower in compression, and can still achieve better compression ratios!
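For comparison, a minimal sketch of the same kind of round trip through Node's built-in bindings, using only the `node:zlib` standard library (nothing here is fflate-specific):

```js
// Node's native Zlib bindings run compiled C code under the hood
import { gzipSync, gunzipSync } from 'node:zlib';

const compressed = gzipSync(Buffer.from('Hello world!'), { level: 9 });
console.log(gunzipSync(compressed).toString()); // Hello world!
```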
Browser support

`fflate` makes heavy use of typed arrays (`Uint8Array`, `Uint16Array`, etc.). Typed arrays can be polyfilled at the cost of performance, but the most recent browser that doesn't support them is from 2011, so I wouldn't bother.

The asynchronous APIs also use `Worker`, which is not supported in a few browsers (however, the vast majority of browsers that support typed arrays support `Worker`).

Other than that, `fflate` is completely ES3, meaning you probably won't even need a bundler to use it.
Testing

You can validate the performance of `fflate` with `npm test` (or `yarn test`, or `pnpm test`). The test suite validates that the module is working as expected, ensures the outputs are no more than 5% larger than competitors at max compression, and outputs performance metrics to `test/results`.
Note that the time it takes for the CLI to show the completion of each test is not representative of the time each package took, so please check the JSON output if you want accurate measurements.
License
This software is MIT Licensed, with special exemptions for projects and organizations as noted below:
- SheetJS is exempt from MIT licensing and may license any source code from this software under the BSD Zero Clause License
Security

No vulnerabilities found.

OpenSSF Scorecard
Score: 2.9/10 (last scanned on 2025-01-27). The Open Source Security Foundation is a cross-industry collaboration to improve the security of open source software (OSS). The Scorecard provides security health metrics for open source projects.

- no binaries found in the repo
- license file detected
  - Info: project has a license file: LICENSE:0
  - Info: FSF or OSI recognized license: MIT License: LICENSE:0
- 3 existing vulnerabilities detected
  - Warn: Project is vulnerable to: GHSA-grv7-fg5c-xmjg
  - Warn: Project is vulnerable to: GHSA-952p-6rrq-rcjv
  - Warn: Project is vulnerable to: GHSA-7hpj-7hhx-2fgx
- Found 5/30 approved changesets -- score normalized to 1
- 0 commit(s) and 2 issue activity found in the last 90 days -- score normalized to 1
- no effort to earn an OpenSSF best practices badge detected
- security policy file not detected
- project is not fuzzed (no fuzzer integrations found)
- branch protection not enabled on development/release branches (branch 'master')
- SAST tool is not run on all commits -- score normalized to 0 (0 commits out of 5 are checked with a SAST tool)