Tiny async queue with concurrency control. Like p-limit or fastq, but smaller and faster
```sh
npm install @henrygd/queue
```
Package health scores: Supply Chain 75.1 · Quality 98.9 · Maintenance 85 · Vulnerability 100 · License 100
GitHub: 72 stars · 2 forks · 1 watching · 69 commits · 2 branches · 1 contributor · updated 25 Nov 2024 · TypeScript (79.39%), JavaScript (20.61%)
Downloads:

| Period | Downloads | vs. previous period |
|---|---|---|
| Last day | 95 | -47.2% |
| Last week | 561 | -20.5% |
| Last month | 2,698 | +88.4% |
| Last year | 7,461 | 0% |
Tiny async queue with concurrency control. Like p-limit or fastq, but smaller and faster. See comparisons and benchmarks below.
Works with: Node, Deno, Bun, browsers, and Cloudflare Workers.
Create a queue with the `newQueue` function, then add async functions (or promise-returning functions) to your queue with the `add` method. You can use `queue.done()` to wait for the queue to be empty.
```ts
import { newQueue } from '@henrygd/queue'

// create a new queue with a concurrency of 2
const queue = newQueue(2)

const pokemon = ['ditto', 'hitmonlee', 'pidgeot', 'poliwhirl', 'golem', 'charizard']

for (const name of pokemon) {
	queue.add(async () => {
		const res = await fetch(`https://pokeapi.co/api/v2/pokemon/${name}`)
		const json = await res.json()
		console.log(`${json.name}: ${json.height * 10}cm | ${json.weight / 10}kg`)
	})
}

console.log('running')
await queue.done()
console.log('done')
```
The return value of `queue.add` is the same as the return value of the supplied function.
```ts
const response = await queue.add(() => fetch('https://pokeapi.co/api/v2/pokemon'))
console.log(response.ok, response.status, response.headers)
```
> [!TIP]
> If you need support for Node's AsyncLocalStorage, import `@henrygd/queue/async-storage` instead.
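Here's a minimal sketch of that variant in use. It assumes the `async-storage` entry exports the same `newQueue` function as the main entry; the store shape and request id are made up for illustration.

```ts
import { AsyncLocalStorage } from 'node:async_hooks'
// assumption: the async-storage entry re-exports the same newQueue API
import { newQueue } from '@henrygd/queue/async-storage'

const storage = new AsyncLocalStorage<{ requestId: string }>()
const queue = newQueue(2)

storage.run({ requestId: 'abc-123' }, () => {
	queue.add(async () => {
		// the store set by storage.run() stays visible inside the queued function
		console.log(storage.getStore()?.requestId) // 'abc-123'
	})
})
```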
```ts
/** Add an async function / promise wrapper to the queue */
queue.add<T>(promiseFunction: () => PromiseLike<T>): Promise<T>
/** Returns a promise that resolves when the queue is empty */
queue.done(): Promise<void>
/** Empties the queue (active promises are not cancelled) */
queue.clear(): void
/** Returns the number of promises currently running */
queue.active(): number
/** Returns the total number of promises in the queue */
queue.size(): number
```
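A quick sketch of how these methods fit together. The assumption that `size()` counts active plus waiting jobs follows from the comment above but isn't spelled out; the timings and counts are arbitrary.

```ts
import { newQueue } from '@henrygd/queue'

const queue = newQueue(2)

// enqueue ten jobs that each take ~100ms
for (let i = 0; i < 10; i++) {
	queue.add(() => new Promise<void>((resolve) => setTimeout(resolve, 100)))
}

console.log(queue.active()) // 2  - jobs currently running
console.log(queue.size())   // 10 - total still in the queue (assumed: active + waiting)
queue.clear()               // drop the waiting jobs; the two active ones are not cancelled
await queue.done()          // resolves once the remaining active jobs settle
```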
| Library | Version | Bundle size (B) | Weekly downloads |
|---|---|---|---|
| @henrygd/queue | 1.0.6 | 355 | dozens :) |
| p-limit | 5.0.0 | 1,763 | 118,953,973 |
| async.queue | 3.2.5 | 6,873 | 53,645,627 |
| fastq | 1.17.1 | 3,050 | 39,257,355 |
| queue | 7.0.0 | 2,840 | 4,259,101 |
| promise-queue | 2.2.5 | 2,200 | 1,092,431 |
All libraries run the exact same test. Each operation measures how quickly the queue can resolve 1,000 async functions. The function just increments a counter and checks whether it has reached 1,000.[^1]

We check for completion inside the function so that `promise-queue` and `p-limit` are not penalized by having to use `Promise.all` (they don't provide a promise that resolves when the queue is empty).
This test was run in Chromium; results in Chrome and Edge are the same. Firefox and Safari are slower and closer, with `@henrygd/queue` just edging out `promise-queue`. I think both are hitting the upper limit of what those browsers will allow.
You can run or tweak it for yourself here: https://jsbm.dev/TKyOdie0sbpOh
> Note: `p-limit` 6.1.0 now places between `async.queue` and `queue` in Node and Deno.
Ryzen 5 4500U | 8GB RAM | Node 22.3.0
Ryzen 7 6800H | 32GB RAM | Node 22.3.0
Ryzen 5 4500U | 8GB RAM | Deno 1.44.4
Ryzen 7 6800H | 32GB RAM | Deno 1.44.4
Ryzen 5 4500U | 8GB RAM | Bun 1.1.17
Ryzen 7 6800H | 32GB RAM | Bun 1.1.17
Uses oha to make 1,000 requests to each worker. Each request creates a queue and resolves 5,000 functions.
This was run locally using Wrangler on a Ryzen 7 6800H laptop. Wrangler uses the same workerd runtime as workers deployed to Cloudflare, so the relative difference should be accurate. Here's the repository for this benchmark.
| Library | Requests/sec | Total (sec) | Average (sec) | Slowest (sec) |
|---|---|---|---|---|
| @henrygd/queue | 816.1074 | 1.2253 | 0.0602 | 0.0864 |
| promise-queue | 647.2809 | 1.5449 | 0.0759 | 0.1149 |
| fastq | 336.7031 | 3.0877 | 0.1459 | 0.2080 |
| async.queue | 198.9986 | 5.0252 | 0.2468 | 0.3544 |
| queue | 85.6483 | 11.6757 | 0.5732 | 0.7629 |
| p-limit | 77.7434 | 12.8628 | 0.6316 | 0.9585 |
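For a sense of what each worker does, here is a hypothetical handler matching that description (the concurrency value and response body are assumptions, not taken from the benchmark repository):

```ts
import { newQueue } from '@henrygd/queue'

export default {
	async fetch(): Promise<Response> {
		// each request creates a fresh queue and resolves 5,000 functions
		const queue = newQueue(5) // concurrency chosen for illustration
		let count = 0
		for (let i = 0; i < 5_000; i++) {
			queue.add(async () => {
				count++
			})
		}
		await queue.done()
		return new Response(`resolved ${count} functions`)
	},
}
```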
`@henrygd/semaphore` - fastest JavaScript inline semaphores and mutexes using async / await.
[^1]: In reality, you may not be running so many jobs at once, and your jobs will take much longer to resolve, so performance will depend more on the jobs themselves.