Gathering detailed insights and metrics for @supercharge/promise-pool
Map-like, concurrent promise processing

```shell
npm install @supercharge/promise-pool
```

Package health scores:

- Supply Chain: 99.3
- Quality: 100
- Maintenance: 81.5
- Vulnerability: 100
- License: 100

Languages:

- JavaScript (52.93%)
- TypeScript (47.07%)

Repository: 791 Stars · 332 Commits · 41 Forks · 11 Watching · 1 Branch · 15 Contributors

Package details:

- Latest Version: 3.2.0
- Package Id: @supercharge/promise-pool@3.2.0
- Unpacked Size: 52.09 kB
- Size: 11.51 kB
- File Count: 19
- NPM Version: 10.2.4
- Node Version: 20.11.1
- Published On: 25 Mar 2024

Downloads (change compared to the previous period):

- Last Day: 17,268 (+5%)
- Last Week: 354,103 (+1.9%)
- Last Month: 1,471,031 (-6.2%)
- Last Year: 18,437,708 (+74.2%)
- Total: 36,489,312
Map-like, concurrent promise processing for Node.js.
Installation · Docs · Usage
Follow @marcuspoehls and @superchargejs for updates!
```shell
npm i @supercharge/promise-pool
```
Using the promise pool is pretty straightforward. The package exposes a class and you can create a promise pool instance using the fluent interface.
Here’s an example using a concurrency of 2:
```js
import { PromisePool } from '@supercharge/promise-pool'

const users = [
  { name: 'Marcus' },
  { name: 'Norman' },
  { name: 'Christian' }
]

const { results, errors } = await PromisePool
  .withConcurrency(2)
  .for(users)
  .process(async (userData, index, pool) => {
    const user = await User.createIfNotExisting(userData)

    return user
  })
```
The promise pool uses a default concurrency of 10:
```js
await PromisePool
  .for(users)
  .process(async data => {
    // processes 10 items in parallel by default
  })
```
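To build intuition for what the pool does internally, here's a minimal, self-contained sketch of bounded-concurrency processing. This is an illustration of the general technique only, not the library's actual implementation: a fixed number of workers repeatedly pull the next unprocessed item until the input is exhausted.

```javascript
// Minimal sketch of bounded-concurrency processing (NOT the library's code):
// start `concurrency` workers; each pulls the next unprocessed item in a loop.
async function processWithConcurrency (items, concurrency, handler) {
  const results = new Array(items.length)
  let nextIndex = 0

  const workers = Array.from({ length: Math.min(concurrency, items.length) }, async () => {
    while (nextIndex < items.length) {
      const index = nextIndex++ // safe: no await between check and increment
      results[index] = await handler(items[index], index)
    }
  })

  await Promise.all(workers)

  return results
}

// Usage: double each number, with at most 2 tasks in flight at once.
processWithConcurrency([1, 2, 3, 4], 2, async n => n * 2)
  .then(results => console.log(results)) // [ 2, 4, 6, 8 ]
```

Because the index is claimed synchronously before the first `await`, two workers can never grab the same item, which is the core invariant any promise pool relies on.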
You can stop the processing of a promise pool using the pool instance provided to the .process() and .handleError() methods. Here's an example of how to stop an active promise pool from within the .process() method:
```js
await PromisePool
  .for(users)
  .process(async (user, index, pool) => {
    if (condition) {
      return pool.stop()
    }

    // processes the `user` data
  })
```
You may also stop the pool from within the .handleError() method in case you need to:
```js
import { PromisePool } from '@supercharge/promise-pool'

await PromisePool
  .for(users)
  .handleError(async (error, user, pool) => {
    if (error instanceof SomethingBadHappenedError) {
      return pool.stop()
    }

    // handle the given `error`
  })
  .process(async (user, index, pool) => {
    // processes the `user` data
  })
```
The promise pool allows for custom error handling. You can take over the error handling by implementing an error handler using the .handleError(handler) method.
If you provide an error handler, the promise pool doesn’t collect any errors. You must then collect errors yourself.
Providing a custom error handler allows you to exit the promise pool early by throwing inside the error handler function. Throwing errors is in line with Node.js error handling using async/await.
```js
import { PromisePool } from '@supercharge/promise-pool'

try {
  const errors = []

  const { results } = await PromisePool
    .for(users)
    .withConcurrency(4)
    .handleError(async (error, user) => {
      if (error instanceof ValidationError) {
        errors.push(error) // you must collect errors yourself
        return
      }

      if (error instanceof ThrottleError) { // execute error handling on specific errors
        await retryUser(user)
        return
      }

      throw error // uncaught errors will immediately stop PromisePool
    })
    .process(async data => {
      // the harder you work for something,
      // the greater you’ll feel when you achieve it
    })

  await handleCollected(errors) // this may throw

  return { results }
} catch (error) {
  await handleThrown(error)
}
```
You can use the onTaskStarted and onTaskFinished methods to hook into the processing of tasks. The provided callback for each method is called when a task starts or finishes processing:
```js
import { PromisePool } from '@supercharge/promise-pool'

await PromisePool
  .for(users)
  .onTaskStarted((item, pool) => {
    console.log(`Progress: ${pool.processedPercentage()}%`)
    console.log(`Active tasks: ${pool.activeTasksCount()}`)
    console.log(`Finished tasks: ${pool.processedItems().length}`)
    console.log(`Finished tasks: ${pool.processedCount()}`)
  })
  .onTaskFinished((item, pool) => {
    // update a progress bar or something else :)
  })
  .process(async (user, index, pool) => {
    // processes the `user` data
  })
```
You can also chain multiple onTaskStarted and onTaskFinished handlers (in case you want to separate some functionality):
```js
import { PromisePool } from '@supercharge/promise-pool'

await PromisePool
  .for(users)
  .onTaskStarted(() => {})
  .onTaskStarted(() => {})
  .onTaskFinished(() => {})
  .onTaskFinished(() => {})
  .process(async (user, index, pool) => {
    // processes the `user` data
  })
```
Sometimes it’s useful to configure a timeout in which a task must finish processing. A task that times out is marked as failed. You may use the withTaskTimeout(<milliseconds>) method to configure a task’s timeout:
```js
import { PromisePool } from '@supercharge/promise-pool'

await PromisePool
  .for(users)
  .withTaskTimeout(2000) // milliseconds
  .process(async (user, index, pool) => {
    // processes the `user` data
  })
```
Notice: the timeout applies to each individual task, not to the pool as a whole. The example above configures a 2-second timeout for each task in the pool.
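The per-task timeout semantics can be pictured with a plain Promise.race: each task races against its own timer, so one slow task timing out doesn't affect the others. This is an illustrative sketch only; the library handles this internally when you call withTaskTimeout.

```javascript
// Sketch of per-task timeout semantics (illustrative, not the library's code):
// each task races against its own timer.
function withTimeout (promise, ms) {
  let timer
  const timeout = new Promise((_, reject) => {
    timer = setTimeout(() => reject(new Error(`timed out after ${ms}ms`)), ms)
  })

  // Clear the timer either way so the process can exit promptly.
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer))
}

const delay = ms => new Promise(resolve => setTimeout(resolve, ms, 'done'))

withTimeout(delay(10), 50).then(value => console.log(value)) // 'done'
withTimeout(delay(100), 20).catch(err => console.log(err.message)) // 'timed out after 20ms'
```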
Sometimes you want the processed results to align with your source items: the resulting items should have the same position in the results array as their related source items. Use the useCorrespondingResults method to apply this behavior:
```js
import { setTimeout } from 'node:timers/promises'
import { PromisePool } from '@supercharge/promise-pool'

const { results } = await PromisePool
  .for([1, 2, 3])
  .withConcurrency(5)
  .useCorrespondingResults()
  .process(async (number, index) => {
    const value = number * 2

    return await setTimeout(10 - index, value)
  })

/**
 * source array: [1, 2, 3]
 * result array: [2, 4, 6]
 * --> result values match the position of their source items
 */
```
For example, you may have three items you want to process. Using corresponding results ensures that the processed result for the first item from the source array is located at the first position in the result array (= index 0). The result for the second item from the source array is placed at the second position in the result array, and so on.
The results array returned by the promise pool after processing has a mixed return type. Each returned item is one of these:

- the actual result value, for tasks that processed successfully
- Symbol('notRun'): for tasks that didn’t run
- Symbol('failed'): for tasks that failed processing

The PromisePool exposes both symbols and you may access them like this:

- Symbol('notRun'): exposed as PromisePool.notRun
- Symbol('failed'): exposed as PromisePool.failed
You may repeat processing for all tasks that didn’t run or failed:
```js
import { PromisePool } from '@supercharge/promise-pool'

const { results, errors } = await PromisePool
  .for([1, 2, 3])
  .withConcurrency(5)
  .useCorrespondingResults()
  .process(async (number) => {
    // …
  })

const itemsNotRun = results.filter(result => {
  return result === PromisePool.notRun
})

const failedItems = results.filter(result => {
  return result === PromisePool.failed
})
```
When using corresponding results, you need to go through the errors array yourself. The default error handling (collecting errors) stays the same, and you can follow the error handling section described above.
```shell
git checkout -b my-new-feature
git commit -am 'Add some feature'
git push origin my-new-feature
```
MIT © Supercharge
superchargejs.com · GitHub @supercharge · Twitter @superchargejs
No vulnerabilities found.

OpenSSF Scorecard results (last scanned on 2024-11-25):

- no dangerous workflow patterns detected
- no binaries found in the repo
- license file detected
- 0 existing vulnerabilities detected
- 0 commit(s) and 0 issue activity found in the last 90 days -- score normalized to 0
- Found 2/28 approved changesets -- score normalized to 0
- dependency not pinned by hash detected -- score normalized to 0
- detected GitHub workflow tokens with excessive permissions
- no effort to earn an OpenSSF best practices badge detected
- project is not fuzzed
- branch protection not enabled on development/release branches
- security policy file not detected
- SAST tool is not run on all commits -- score normalized to 0

The Open Source Security Foundation is a cross-industry collaboration to improve the security of open source software (OSS). The Scorecard provides security health metrics for open source projects.