Gathering detailed insights and metrics for fs-capacitor
Filesystem-buffered, passthrough stream that buffers indefinitely rather than propagating backpressure from downstream consumers.
npm install fs-capacitor
Supply Chain: 98.9 · Quality: 100 · Maintenance: 76 · Vulnerability: 100 · License: 100
36 Stars · 188 Commits · 8 Forks · 6 Watching · 11 Branches · 5 Contributors · Updated on 09 Jul 2024 · TypeScript (100%)
Downloads

| Period     | Downloads  | Change vs. previous period |
| ---------- | ---------- | -------------------------- |
| Last day   | 114,927    | -12.7%                     |
| Last week  | 690,262    | -1.5%                      |
| Last month | 2,929,656  | +10.5%                     |
| Last year  | 33,053,913 | -19.9%                     |
FS Capacitor is a filesystem buffer for finite node streams. It supports simultaneous read/write, and can be used to create multiple independent readable streams, each starting at the beginning of the buffer.
This is useful for file uploads and other situations where you want to avoid delays to the source stream, but have slow downstream transformations to apply:
```js
import fs from "fs";
import http from "http";
import { WriteStream } from "fs-capacitor";

http.createServer((req, res) => {
  const capacitor = new WriteStream();
  const destination = fs.createWriteStream("destination.txt");

  // pipe data to the capacitor
  req.pipe(capacitor);

  // read data from the capacitor
  capacitor
    .createReadStream()
    .pipe(/* some slow Transform streams here */)
    .pipe(destination);

  // read data from the very beginning
  setTimeout(() => {
    capacitor.createReadStream().pipe(/* elsewhere */);

    // you can destroy a capacitor as soon as no more read streams are needed
    // without worrying if existing streams are fully consumed
    capacitor.destroy();
  }, 100);
});
```
It is especially useful for cases like graphql-upload, where server code may need to stash earlier parts of a stream until later parts have been processed, and to attach multiple consumers at different times.
FS Capacitor creates its temporary files in the directory identified by os.tmpdir() and attempts to remove them once writeStream.destroy() has been called and all read streams are fully consumed or destroyed.

Please note that FS Capacitor does NOT release disk space as data is consumed, and is therefore not suitable for use with infinite streams or streams larger than the filesystem.
FS Capacitor cleans up all of its temporary files before the process exits by listening to the node process's exit event. This event, however, is only emitted when the process is about to exit as a result of either:

- The process.exit() method being called explicitly.
- The Node.js event loop no longer having any additional work to perform.
When the node process receives a SIGINT, SIGTERM, or SIGHUP signal and there is no handler, it will exit without emitting the exit event.
Beginning in version 3, fs-capacitor will NOT listen for these signals. Instead, the application should handle these signals according to its own logic and call process.exit()
when it is ready to exit. This allows the application to implement its own graceful shutdown procedures, such as waiting for a stream to finish.
The following can be added to the application to ensure resources are cleaned up before a signal-induced exit:
```js
function shutdown() {
  // Any sync or async graceful shutdown procedures can be run before exiting…
  process.exit(0);
}

process.on("SIGINT", shutdown);
process.on("SIGTERM", shutdown);
process.on("SIGHUP", shutdown);
```
`WriteStream` extends `stream.Writable`

`new WriteStream(options: WriteStreamOptions)`

Create a new `WriteStream` instance.
`.createReadStream(options?: ReadStreamOptions): ReadStream`

Create a new `ReadStream` instance attached to the `WriteStream` instance.

Calling `.createReadStream()` on a released `WriteStream` will throw a `ReadAfterReleasedError` error.

Calling `.createReadStream()` on a destroyed `WriteStream` will throw a `ReadAfterDestroyedError` error.

As soon as a `ReadStream` ends or is closed (such as by calling `readStream.destroy()`), it is detached from its `WriteStream`.
`.release(): void`

Release the `WriteStream`'s claim on the underlying resources. Once called, destruction of the underlying resources is performed as soon as all attached `ReadStream`s are removed.
`.destroy(error?: ?Error): void`

Destroy the `WriteStream` and all attached `ReadStream`s. If `error` is present, attached `ReadStream`s are destroyed with the same error.
WriteStreamOptions

`.highWaterMark?: number`

Uses node's default of 16384 (16kb). Optional buffer size at which the writable stream will begin returning `false`. See node's docs for `stream.Writable`. For the curious, node has a guide on backpressure in streams.
`.defaultEncoding`

Uses node's default of `utf8`. Optional default encoding to use when no encoding is specified as an argument to `stream.write()`. See node's docs for `stream.Writable`. Possible values depend on the version of node, and are defined in node's buffer implementation.
`.tmpdir`

Uses node's `os.tmpdir` by default. This function returns the directory used by fs-capacitor to store file buffers, and is intended primarily for testing and debugging.
`ReadStream` extends `stream.Readable`

ReadStreamOptions
`.highWaterMark`

Uses node's default of 16384 (16kb). Optional value to use as the readable stream's highWaterMark, specifying the number of bytes (for binary data) or characters (for strings) that will be buffered into memory. See node's docs for `stream.Readable`. For the curious, node has a guide on backpressure in streams.
`.encoding`

Uses node's default of `utf8`. Optional encoding to use when the stream's output is desired as a string. See node's docs for `stream.Readable`. Possible values depend on the version of node, and are defined in node's buffer implementation.
No vulnerabilities found.
OpenSSF Scorecard checks:

- security policy file detected
- no binaries found in the repo
- no dangerous workflow patterns detected
- 0 existing vulnerabilities detected
- license file detected
- SAST tool detected but not run on all commits
- Found 1/10 approved changesets -- score normalized to 1
- 0 commit(s) and 0 issue activity found in the last 90 days -- score normalized to 0
- dependency not pinned by hash detected -- score normalized to 0
- detected GitHub workflow tokens with excessive permissions
- no effort to earn an OpenSSF best practices badge detected
- project is not fuzzed
Last Scanned on 2024-11-18
The Open Source Security Foundation is a cross-industry collaboration to improve the security of open source software (OSS). The Scorecard provides security health metrics for open source projects.
Learn More