Gathering detailed insights and metrics for @ipld/unixfs
npm install @ipld/unixfs
Package health scores:

- Supply Chain: 60.4
- Quality: 100
- Maintenance: 82.8
- Vulnerability: 100
- License: 97.9
7 Stars · 73 Commits · 4 Forks · 11 Watching · 19 Branches · 9 Contributors

Updated on 03 Sept 2024

Languages: JavaScript (97.73%), TypeScript (2.27%)
Downloads:

| Period | Downloads | Change vs previous period |
| --- | --- | --- |
| Last day | 733 | -47.8% |
| Last week | 5,929 | -47.1% |
| Last month | 32,939 | +29.2% |
| Last year | 295,491 | +365.1% |
An implementation of the UnixFS spec in JavaScript designed for use with multiformats.
This library provides functionality similar to ipfs-unixfs-importer, but it has been designed around a different set of use cases:

Writing into Content Addressable Archives (CAR).
To allow encoding file(s) into an arbitrary number of CARs, the library makes no assumptions about how blocks will be consumed: it returns a ReadableStream of blocks and leaves the rest to the caller.

Incremental and resumable writes
Instead of passing a stream of files, the user creates files, writes into them, and on closing gets a Promise<CID> for each one. This removes the need to map streamed files back to their CIDs on the other end.

Complete control of memory and concurrency
With the writer-style API, users choose how many files to write concurrently and can revisit that decision based on the other tasks the application performs. Users can also specify a buffer size to tweak read/write coordination.

No indirect configuration
The library removes indirection by taking an approach similar to the multiformats library: instead of passing chunker and layout config options, you pass chunker / layout / encoder interface implementations.
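The memory-control point above can be illustrated with the standard Web Streams API (global in modern Node.js and browsers). A queuing strategy bounds a stream's internal queue, and the writer's `desiredSize` reflects the remaining room, which is what lets a producer throttle itself. This is a generic sketch of the mechanism, not the internals of `UnixFS.withCapacity`; the high-water mark of 4 is arbitrary:

```js
// Sketch: bounding a TransformStream's queue with a queuing strategy,
// analogous in spirit to UnixFS.withCapacity. The size here is arbitrary.
const strategy = new CountQueuingStrategy({ highWaterMark: 4 })
const { readable, writable } = new TransformStream({}, strategy)
const writer = writable.getWriter()

// With an empty queue, desiredSize equals the high-water mark; it drops as
// chunks queue up faster than the readable end is consumed, which is the
// signal a producer uses to pause.
console.log(writer.desiredSize) // 4

// A well-behaved producer awaits `ready` before writing more:
// await writer.ready
// writer.write(chunk)
```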
You can encode a file as follows:
```js
import * as UnixFS from "@ipld/unixfs"

// Create readable & writable streams with an internal queue that can
// hold around 32 blocks
const { readable, writable } = new TransformStream(
  {},
  UnixFS.withCapacity(1048576 * 32)
)
// Next we create a writer with a filesystem-like API for encoding files and
// directories into IPLD blocks that will come out on the `readable` end.
const writer = UnixFS.createWriter({ writable })

// Create a file writer that can be used to encode a UnixFS file.
const file = UnixFS.createFileWriter(writer)
// Write some content
file.write(new TextEncoder().encode("hello world"))
// Finalize the file by closing it.
const { cid } = await file.close()

// Close the writer to close the underlying block stream.
writer.close()

// We could encode all of this as a CAR file
encodeCAR({ roots: [cid], blocks: readable })
```
You can encode (non-sharded) directories with the provided API as well:
```js
import * as UnixFS from "@ipld/unixfs"

export const demo = async () => {
  const { readable, writable } = new TransformStream()
  const writer = UnixFS.createWriter({ writable })

  // Write a file
  const file = UnixFS.createFileWriter(writer)
  file.write(new TextEncoder().encode("hello world"))
  const fileLink = await file.close()

  // Create a directory and add the file we encoded above
  const dir = UnixFS.createDirectoryWriter(writer)
  dir.set("intro.md", fileLink)
  const dirLink = await dir.close()

  // Now wrap the directory above with another one and also add the same
  // file there
  const root = UnixFS.createDirectoryWriter(writer)
  root.set("user", dirLink)
  root.set("hello.md", fileLink)

  // Creates the following UnixFS structure, where intro.md and hello.md
  // link to the same IPFS file.
  // ./
  // ./user/intro.md
  // ./hello.md
  const rootLink = await root.close()
  // ...
  writer.close()
}
```
You can configure the DAG layout, chunking, and a number of other things by providing API-compatible components. The library ships with several, but you can also bring your own.
```js
import * as UnixFS from "@ipld/unixfs"
import * as Rabin from "@ipld/unixfs/file/chunker/rabin"
import * as Trickle from "@ipld/unixfs/file/layout/trickle"
import * as RawLeaf from "multiformats/codecs/raw"
import { sha256 } from "multiformats/hashes/sha2"

const demo = async blob => {
  const { readable, writable } = new TransformStream()
  const writer = UnixFS.createWriter({
    writable,
    // You can pass only the things you want to override
    settings: {
      fileChunker: await Rabin.create({
        avg: 60000,
        min: 100,
        max: 662144,
      }),
      fileLayout: Trickle.configure({ maxDirectLeaves: 100 }),
      // Encode leaf nodes as raw blocks
      fileChunkEncoder: RawLeaf,
      smallFileEncoder: RawLeaf,
      fileEncoder: UnixFS,
      hasher: sha256,
    },
  })

  const file = UnixFS.createFileWriter(writer)
  // ...
}
```
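To give a feel for what a chunker does: it decides where to cut the byte stream into leaf blocks. Rabin fingerprinting cuts at content-defined boundaries; the simplest alternative cuts at fixed offsets. The sketch below is illustrative only and does not implement the library's actual chunker interface (consult the package's TypeScript types for that); 262144 (256 KiB) is the common IPFS default chunk size:

```js
// Illustrative only: naive fixed-size chunking, NOT the @ipld/unixfs
// chunker interface. Splits a byte array at fixed offsets.
const fixedSizeChunks = (bytes, size = 262144) => {
  const chunks = []
  for (let offset = 0; offset < bytes.length; offset += size) {
    chunks.push(bytes.subarray(offset, offset + size))
  }
  return chunks
}
```

Content-defined chunkers like Rabin instead pick boundaries from the data itself, so an insertion near the start of a file does not shift every subsequent chunk.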
Licensed under either of the Apache License, Version 2.0 or the MIT license, at your option.
Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.
No security vulnerabilities found.