Installation

```sh
npm install @maselious/bottleneck
```
Developer
SGrondin
Developer Guide
Module System | Min. Node Version | Typescript Support | Node Version | NPM Version |
---|---|---|---|---|
CommonJS | | No | 18.13.0 | 8.19.3 |
Statistics
1,838 Stars
333 Commits
81 Forks
22 Watching
10 Branches
14 Contributors
Updated on 27 Nov 2024
Bundle Size
59.98 kB (minified)
13.87 kB (minified + gzipped)
Languages
JavaScript (82.22%)
CoffeeScript (8.04%)
HTML (4.57%)
Lua (3.34%)
TypeScript (1.51%)
Shell (0.33%)
Total Downloads
77,001 (cumulative)

Period | Downloads | Change |
---|---|---|
Last day | 991 | -9.9% compared to previous day |
Last week | 10,919 | +97.7% compared to previous week |
Last month | 25,987 | +262% compared to previous month |
Last year | 76,712 | +26,443.9% compared to previous year |
bottleneck
Bottleneck is a lightweight and zero-dependency Task Scheduler and Rate Limiter for Node.js and the browser.
Bottleneck is an easy solution as it adds very little complexity to your code. It is battle-hardened, reliable, and production-ready, and is used at scale in private companies and open-source software.
It supports Clustering: it can rate limit jobs across multiple Node.js instances. It uses Redis and strictly atomic operations to stay reliable in the presence of unreliable clients and networks. It also supports Redis Cluster and Redis Sentinel.
- Install
- Quick Start
- Constructor
- Reservoir Intervals
- submit()
- schedule()
- wrap()
- Job Options
- Jobs Lifecycle
- Events
- Retries
- updateSettings()
- incrementReservoir()
- currentReservoir()
- stop()
- chain()
- Group
- Batching
- Clustering
- Debugging Your Application
- Upgrading To v2
- Contributing
Install
```sh
npm install --save bottleneck
```

```js
import Bottleneck from "bottleneck";

// Note: To support older browsers and Node <6.0, you must import the ES5 bundle instead.
var Bottleneck = require("bottleneck/es5");
```
Quick Start
Step 1 of 3
Most APIs have a rate limit. For example, to execute 3 requests per second:
```js
const limiter = new Bottleneck({
  minTime: 333
});
```
If there's a chance some requests might take longer than 333ms and you want to prevent more than 1 request from running at a time, add `maxConcurrent: 1`:

```js
const limiter = new Bottleneck({
  maxConcurrent: 1,
  minTime: 333
});
```

`minTime` and `maxConcurrent` are enough for the majority of use cases. They work well together to ensure a smooth rate of requests. If your use case requires executing requests in bursts or every time a quota resets, look into Reservoir Intervals.
Step 2 of 3
➤ Using promises?
Instead of this:
```js
myFunction(arg1, arg2)
.then((result) => {
  /* handle result */
});
```
Do this:
```js
limiter.schedule(() => myFunction(arg1, arg2))
.then((result) => {
  /* handle result */
});
```
Or this:
```js
const wrapped = limiter.wrap(myFunction);

wrapped(arg1, arg2)
.then((result) => {
  /* handle result */
});
```
➤ Using async/await?
Instead of this:
```js
const result = await myFunction(arg1, arg2);
```
Do this:
```js
const result = await limiter.schedule(() => myFunction(arg1, arg2));
```
Or this:
```js
const wrapped = limiter.wrap(myFunction);

const result = await wrapped(arg1, arg2);
```
➤ Using callbacks?
Instead of this:
```js
someAsyncCall(arg1, arg2, callback);
```
Do this:
```js
limiter.submit(someAsyncCall, arg1, arg2, callback);
```
Step 3 of 3
Remember...
Bottleneck builds a queue of jobs and executes them as soon as possible. By default, the jobs will be executed in the order they were received.
Read the 'Gotchas' and you're good to go. Or keep reading to learn about all the fine-tuning and advanced options available. If your rate limits need to be enforced across a cluster of computers, read the Clustering docs.
Need help debugging your application? Or maybe you want to batch up requests into fewer calls instead of throttling them?
Gotchas & Common Mistakes
- Make sure the function you pass to `schedule()` or `wrap()` only returns once all the work it does has completed.

Instead of this:

```js
limiter.schedule(() => {
  tasksArray.forEach(x => processTask(x));
  // BAD, we return before our processTask() functions are finished processing!
});
```

Do this:

```js
limiter.schedule(() => {
  const allTasks = tasksArray.map(x => processTask(x));
  // GOOD, we wait until all tasks are done.
  return Promise.all(allTasks);
});
```

- If you're passing an object's method as a job, you'll probably need to `bind()` the object:

```js
// instead of this:
limiter.schedule(object.doSomething);
// do this:
limiter.schedule(object.doSomething.bind(object));
// or, wrap it in an arrow function instead:
limiter.schedule(() => object.doSomething());
```

- Bottleneck requires Node 6+ to function. However, an ES5 build is included: `var Bottleneck = require("bottleneck/es5");`.
- Make sure you're catching `"error"` events emitted by your limiters!
- Consider setting a `maxConcurrent` value instead of leaving it `null`. This can help your application's performance, especially if you think the limiter's queue might become very long.
- If you plan on using `priorities`, make sure to set a `maxConcurrent` value.
- When using `submit()`, if a callback isn't necessary, you must pass `null` or an empty function instead. It will not work otherwise.
- When using `submit()`, make sure all the jobs will eventually complete by calling their callback, or set an `expiration`. Even if you submitted your job with a `null` callback, it still needs to call its callback. This is particularly important if you are using a `maxConcurrent` value that isn't `null` (unlimited), otherwise those uncompleted jobs will clog up the limiter and no new jobs will be allowed to run. It's safe to call the callback more than once; subsequent calls are ignored.
- Using tools like `mockdate` in your tests to change time in JavaScript will likely result in undefined behavior from Bottleneck.
Docs
Constructor
```js
const limiter = new Bottleneck({/* options */});
```
Basic options:
Option | Default | Description |
---|---|---|
maxConcurrent | null (unlimited) | How many jobs can be executing at the same time. Consider setting a value instead of leaving it null; it can help your application's performance, especially if you think the limiter's queue might get very long. |
minTime | 0 ms | How long to wait after launching a job before launching another one. |
highWater | null (unlimited) | How long can the queue be? When the queue length exceeds that value, the selected strategy is executed to shed the load. |
strategy | Bottleneck.strategy.LEAK | Which strategy to use when the queue gets longer than the high water mark. Read about strategies. Strategies are never executed if highWater is null . |
penalty | 15 * minTime , or 5000 when minTime is 0 | The penalty value used by the BLOCK strategy. |
reservoir | null (unlimited) | How many jobs can be executed before the limiter stops executing jobs. If reservoir reaches 0 , no jobs will be executed until it is no longer 0 . New jobs will still be queued up. |
reservoirRefreshInterval | null (disabled) | Every reservoirRefreshInterval milliseconds, the reservoir value will be automatically updated to the value of reservoirRefreshAmount . The reservoirRefreshInterval value should be a multiple of 250 (5000 for Clustering). |
reservoirRefreshAmount | null (disabled) | The value to set reservoir to when reservoirRefreshInterval is in use. |
reservoirIncreaseInterval | null (disabled) | Every reservoirIncreaseInterval milliseconds, the reservoir value will be automatically incremented by reservoirIncreaseAmount . The reservoirIncreaseInterval value should be a multiple of 250 (5000 for Clustering). |
reservoirIncreaseAmount | null (disabled) | The increment applied to reservoir when reservoirIncreaseInterval is in use. |
reservoirIncreaseMaximum | null (disabled) | The maximum value that reservoir can reach when reservoirIncreaseInterval is in use. |
Promise | Promise (built-in) | This lets you override the Promise library used by Bottleneck. |
Reservoir Intervals
Reservoir Intervals let you execute requests in bursts, by automatically controlling the limiter's `reservoir` value. The `reservoir` is simply the number of jobs the limiter is allowed to execute. Once the value reaches 0, it stops starting new jobs.
There are 2 types of Reservoir Intervals: Refresh Intervals and Increase Intervals.
Refresh Interval
In this example, we throttle to 100 requests every 60 seconds:
```js
const limiter = new Bottleneck({
  reservoir: 100, // initial value
  reservoirRefreshAmount: 100,
  reservoirRefreshInterval: 60 * 1000, // must be divisible by 250

  // also use maxConcurrent and/or minTime for safety
  maxConcurrent: 1,
  minTime: 333 // pick a value that makes sense for your use case
});
```
`reservoir` is a counter decremented every time a job is launched; here we set its initial value to 100. Then, every `reservoirRefreshInterval` (60000 ms), `reservoir` is automatically updated to be equal to `reservoirRefreshAmount` (100).
Increase Interval
In this example, we throttle jobs to meet the Shopify API Rate Limits. Users are allowed to send 40 requests initially, then every second grants 2 more requests up to a maximum of 40.
```js
const limiter = new Bottleneck({
  reservoir: 40, // initial value
  reservoirIncreaseAmount: 2,
  reservoirIncreaseInterval: 1000, // must be divisible by 250
  reservoirIncreaseMaximum: 40,

  // also use maxConcurrent and/or minTime for safety
  maxConcurrent: 5,
  minTime: 250 // pick a value that makes sense for your use case
});
```
Warnings
Reservoir Intervals are an advanced feature, please take the time to read and understand the following warnings.
- Reservoir Intervals are not a replacement for `minTime` and `maxConcurrent`. It's strongly recommended to also use `minTime` and/or `maxConcurrent` to spread out the load. For example, suppose a lot of jobs are queued up because the `reservoir` is 0. Every time the Refresh Interval is triggered, a number of jobs equal to `reservoirRefreshAmount` will automatically be launched, all at the same time! To prevent this flooding effect and keep your application running smoothly, use `minTime` and `maxConcurrent` to stagger the jobs.
- The Reservoir Interval starts from the moment the limiter is created. Let's suppose we're using `reservoirRefreshAmount: 5`. If you happen to add 10 jobs just 1ms before the refresh is triggered, the first 5 will run immediately, then 1ms later the reservoir value will refresh and the last 5 will also run right away. The limiter will have run 10 jobs in just over 1ms, no matter what your reservoir interval was!
- Reservoir Intervals prevent a limiter from being garbage collected. Call `limiter.disconnect()` to clear the interval and allow the memory to be freed. However, it's not necessary to call `.disconnect()` to allow the Node.js process to exit.
submit()
Adds a job to the queue. This is the callback version of `schedule()`.
```js
limiter.submit(someAsyncCall, arg1, arg2, callback);
```
You can pass `null` instead of an empty function if there is no callback, but `someAsyncCall` still needs to call its callback to let the limiter know it has completed its work.
`submit()` can also accept advanced options.
schedule()
Adds a job to the queue. This is the Promise and async/await version of `submit()`.
```js
const fn = function(arg1, arg2) {
  return httpGet(arg1, arg2); // Here httpGet() returns a promise
};

limiter.schedule(fn, arg1, arg2)
.then((result) => {
  /* ... */
});
```
In other words, `schedule()` takes a function `fn` and a list of arguments, and returns a promise for the result of running `fn` according to the rate limits. `schedule()` can also accept advanced options.
Here's another example:
```js
// suppose that `client.get(url)` returns a promise

const url = "https://wikipedia.org";

limiter.schedule(() => client.get(url))
.then(response => console.log(response.body));
```
wrap()
Takes a function that returns a promise. Returns a function identical to the original, but rate limited.
```js
const wrapped = limiter.wrap(fn);

wrapped()
.then(function (result) {
  /* ... */
})
.catch(function (error) {
  // Bottleneck might need to fail the job even if the original function can never fail.
  // For example, your job is taking longer than the `expiration` time you've set.
});
```
Job Options
`submit()`, `schedule()`, and `wrap()` all accept advanced options.
```js
// Submit
limiter.submit({/* options */}, someAsyncCall, arg1, arg2, callback);

// Schedule
limiter.schedule({/* options */}, fn, arg1, arg2);

// Wrap
const wrapped = limiter.wrap(fn);
wrapped.withOptions({/* options */}, arg1, arg2);
```
Option | Default | Description |
---|---|---|
priority | 5 | A priority between 0 and 9. A job with a priority of 4 will be queued ahead of a job with a priority of 5. Important: You must set a low maxConcurrent value for priorities to work, otherwise there is nothing to queue because jobs will be scheduled immediately! |
weight | 1 | Must be an integer equal to or higher than 0 . The weight is what increases the number of running jobs (up to maxConcurrent ) and decreases the reservoir value. |
expiration | null (unlimited) | The number of milliseconds a job is given to complete. Jobs that execute for longer than expiration ms will be failed with a BottleneckError . |
id | <no-id> | You should give an ID to your jobs, it helps with debugging. |
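For example, here is a sketch combining all four options with `schedule()` (the job function `fn` and its arguments are placeholders):

```js
limiter.schedule(
  { id: "job-1", priority: 4, weight: 2, expiration: 5000 },
  fn, arg1, arg2
)
.then((result) => { /* ... */ })
.catch((error) => {
  // A BottleneckError if the job ran longer than its 5000ms expiration
});
```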
Strategies
A strategy is a simple algorithm that is executed every time adding a job would cause the number of queued jobs to exceed `highWater`. Strategies are never executed if `highWater` is `null`.
Bottleneck.strategy.LEAK
When adding a new job to a limiter, if the queue length reaches `highWater`, drop the oldest job with the lowest priority. This is useful when jobs that have been waiting for too long are not important anymore. If all the queued jobs are more important (based on their `priority` value) than the one being added, it will not be added.
Bottleneck.strategy.OVERFLOW_PRIORITY
Same as `LEAK`, except it will only drop jobs that are less important than the one being added. If all the queued jobs are as or more important than the new one, it will not be added.
Bottleneck.strategy.OVERFLOW
When adding a new job to a limiter, if the queue length reaches `highWater`, do not add the new job. This strategy totally ignores priority levels.
Bottleneck.strategy.BLOCK
When adding a new job to a limiter, if the queue length reaches `highWater`, the limiter falls into "blocked mode". All queued jobs are dropped and no new jobs will be accepted until the limiter unblocks. It will unblock after `penalty` milliseconds have passed without receiving a new job. `penalty` is equal to `15 * minTime` (or `5000` if `minTime` is `0`) by default. This strategy is ideal when bruteforce attacks are to be expected. This strategy totally ignores priority levels.
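As a minimal sketch, here is a limiter that sheds load with the `LEAK` strategy once 100 jobs are queued, logging each shed job via the `"dropped"` event (covered in Events below):

```js
const limiter = new Bottleneck({
  maxConcurrent: 2,
  minTime: 100,
  highWater: 100, // strategies only run when highWater is not null
  strategy: Bottleneck.strategy.LEAK
});

limiter.on("dropped", (dropped) => {
  // The dropped job is passed to this listener
  console.log("Dropped a job:", dropped);
});
```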
Jobs lifecycle
- Received. Your new job has been added to the limiter. Bottleneck needs to check whether it can be accepted into the queue.
- Queued. Bottleneck has accepted your job, but it can not tell at what exact timestamp it will run yet, because it is dependent on previous jobs.
- Running. Your job is not in the queue anymore, it will be executed after a delay that was computed according to your `minTime` setting.
- Executing. Your job is executing its code.
- Done. Your job has completed.
Note: By default, Bottleneck does not keep track of DONE jobs, to save memory. You can enable this feature by passing `trackDoneStatus: true` as an option when creating a limiter.
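For example:

```js
const limiter = new Bottleneck({ trackDoneStatus: true });
```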
counts()
```js
const counts = limiter.counts();

console.log(counts);
/*
{
  RECEIVED: 0,
  QUEUED: 0,
  RUNNING: 0,
  EXECUTING: 0,
  DONE: 0
}
*/
```
Returns an object with the current number of jobs per status in the limiter.
jobStatus()
```js
console.log(limiter.jobStatus("some-job-id"));
// Example: QUEUED
```
Returns the status of the job with the provided job id in the limiter. Returns `null` if no job with that id exists.
jobs()
```js
console.log(limiter.jobs("RUNNING"));
// Example: ['id1', 'id2']
```
Returns an array of all the job ids with the specified status in the limiter. Not passing a status string returns all the known ids.
queued()
```js
const count = limiter.queued(priority);

console.log(count);
```
`priority` is optional. Returns the number of `QUEUED` jobs with the given `priority` level. Omitting the `priority` argument returns the total number of queued jobs in the limiter.
clusterQueued()
```js
const count = await limiter.clusterQueued();

console.log(count);
```
Returns the number of `QUEUED` jobs in the Cluster.
empty()
```js
if (limiter.empty()) {
  // do something...
}
```
Returns a boolean which indicates whether there are any `RECEIVED` or `QUEUED` jobs in the limiter.
running()
```js
limiter.running()
.then((count) => console.log(count));
```
Returns a promise that returns the total weight of the `RUNNING` and `EXECUTING` jobs in the Cluster.
done()
```js
limiter.done()
.then((count) => console.log(count));
```
Returns a promise that returns the total weight of `DONE` jobs in the Cluster. Does not require passing the `trackDoneStatus: true` option.
check()
```js
limiter.check()
.then((wouldRunNow) => console.log(wouldRunNow));
```
Checks if a new job would be executed immediately if it was submitted now. Returns a promise that returns a boolean.
Events
'error'
1limiter.on("error", function (error) { 2 /* handle errors here */ 3});
The two main causes of error events are: uncaught exceptions in your event handlers, and network errors when Clustering is enabled.
'failed'
1limiter.on("failed", function (error, jobInfo) { 2 // This will be called every time a job fails. 3});
'retry'
See Retries to learn how to automatically retry jobs.
1limiter.on("retry", function (message, jobInfo) { 2 // This will be called every time a job is retried. 3});
'empty'
1limiter.on("empty", function () { 2 // This will be called when `limiter.empty()` becomes true. 3});
'idle'
1limiter.on("idle", function () { 2 // This will be called when `limiter.empty()` is `true` and `limiter.running()` is `0`. 3});
'dropped'
1limiter.on("dropped", function (dropped) { 2 // This will be called when a strategy was triggered. 3 // The dropped request is passed to this event listener. 4});
'depleted'
1limiter.on("depleted", function (empty) { 2 // This will be called every time the reservoir drops to 0. 3 // The `empty` (boolean) argument indicates whether `limiter.empty()` is currently true. 4});
'debug'
1limiter.on("debug", function (message, data) { 2 // Useful to figure out what the limiter is doing in real time 3 // and to help debug your application 4});
'received' 'queued' 'scheduled' 'executing' 'done'
1limiter.on("queued", function (info) { 2 // This event is triggered when a job transitions from one Lifecycle stage to another 3});
See Jobs Lifecycle for more information.
These Lifecycle events are not triggered for jobs located on another limiter in a Cluster, for performance reasons.
Other event methods
Use `removeAllListeners()` with an optional event name as first argument to remove listeners. Use `.once()` instead of `.on()` to only receive a single event.
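For instance:

```js
// Fire the handler only for the first "idle" event
limiter.once("idle", () => console.log("Limiter is idle"));

// Remove every "debug" listener; with no argument, all listeners are removed
limiter.removeAllListeners("debug");
```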
Retries
The following example:
```js
const limiter = new Bottleneck();

// Listen to the "failed" event
limiter.on("failed", async (error, jobInfo) => {
  const id = jobInfo.options.id;
  console.warn(`Job ${id} failed: ${error}`);

  if (jobInfo.retryCount === 0) { // Here we only retry once
    console.log(`Retrying job ${id} in 25ms!`);
    return 25;
  }
});

// Listen to the "retry" event
limiter.on("retry", (error, jobInfo) => console.log(`Now retrying ${jobInfo.options.id}`));

const main = async function () {
  let executions = 0;

  // Schedule one job
  const result = await limiter.schedule({ id: 'ABC123' }, async () => {
    executions++;
    if (executions === 1) {
      throw new Error("Boom!");
    } else {
      return "Success!";
    }
  });

  console.log(`Result: ${result}`);
}

main();
```
will output:

```
Job ABC123 failed: Error: Boom!
Retrying job ABC123 in 25ms!
Now retrying ABC123
Result: Success!
```
To re-run your job, simply return an integer from the `'failed'` event handler. The number returned is how many milliseconds to wait before retrying it. Return `0` to retry it immediately.
IMPORTANT: When you ask the limiter to retry a job it will not send it back into the queue. It will stay in the `EXECUTING` state until it succeeds or until you stop retrying it. This means that it counts as a concurrent job for `maxConcurrent` even while it's just waiting to be retried. The number of milliseconds to wait ignores your `minTime` settings.
updateSettings()
```js
limiter.updateSettings(options);
```
The options are the same as the limiter constructor.
Note: Changes don't affect `SCHEDULED` jobs.
incrementReservoir()
```js
limiter.incrementReservoir(incrementBy);
```
Returns a promise that returns the new reservoir value.
currentReservoir()
```js
limiter.currentReservoir()
.then((reservoir) => console.log(reservoir));
```
Returns a promise that returns the current reservoir value.
stop()
The `stop()` method is used to safely shut down a limiter. It prevents any new jobs from being added to the limiter and waits for all `EXECUTING` jobs to complete.
```js
limiter.stop(options)
.then(() => {
  console.log("Shutdown completed!")
});
```
`stop()` returns a promise that resolves once all the `EXECUTING` jobs have completed and, if desired, once all non-`EXECUTING` jobs have been dropped.
Option | Default | Description |
---|---|---|
dropWaitingJobs | true | When true, drop all the RECEIVED, QUEUED and RUNNING jobs. When false, allow those jobs to complete before resolving the Promise returned by this method. |
dropErrorMessage | This limiter has been stopped. | The error message used to drop jobs when dropWaitingJobs is true. |
enqueueErrorMessage | This limiter has been stopped and cannot accept new jobs. | The error message used to reject a job added to the limiter after stop() has been called. |
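For example, a sketch of a graceful shutdown that lets queued jobs finish instead of dropping them:

```js
limiter.stop({ dropWaitingJobs: false })
.then(() => {
  console.log("All queued jobs have completed, shutting down.");
  process.exit(0);
});
```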
chain()
Chaining a limiter to another limiter means that tasks ready to be executed will be added to that other limiter. Suppose you have 2 types of tasks, A and B. They both have their own limiter with their own settings, but both must also follow a global limiter G:
```js
const limiterA = new Bottleneck( /* some settings */ );
const limiterB = new Bottleneck( /* some different settings */ );
const limiterG = new Bottleneck( /* some global settings */ );

limiterA.chain(limiterG);
limiterB.chain(limiterG);

// Requests added to limiterA must follow the A and G rate limits.
// Requests added to limiterB must follow the B and G rate limits.
// Requests added to limiterG must follow the G rate limits.
```
To unchain, call `limiter.chain(null);`.
Group
The `Group` feature of Bottleneck manages many limiters automatically for you. It creates limiters dynamically and transparently.
Let's take a DNS server as an example of how Bottleneck can be used. It's a service that sees a lot of abuse and where incoming DNS requests need to be rate limited. Bottleneck is so tiny, it's acceptable to create one limiter for each origin IP, even if it means creating thousands of limiters. The `Group` feature is perfect for this use case. Create one Group and use the origin IP to rate limit each IP independently. Each call with the same key (IP) will be routed to the same underlying limiter. A Group is created like a limiter:
```js
const group = new Bottleneck.Group(options);
```
The `options` object will be used for every limiter created by the Group.
The Group is then used with the `.key(str)` method:
```js
// In this example, the key is an IP
group.key("77.66.54.32").schedule(() => {
  /* process the request */
});
```
key()
- `str` : The key to use. All jobs added with the same key will use the same underlying limiter. Default: `""`

The return value of `.key(str)` is a limiter. If it doesn't already exist, it is generated for you. Calling `key()` is how limiters are created inside a Group.
Limiters that have been idle for longer than 5 minutes are deleted to avoid memory leaks; this value can be changed by passing a different `timeout` option, in milliseconds.
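For example, a sketch of a Group whose idle limiters are kept for 10 minutes:

```js
const group = new Bottleneck.Group({
  maxConcurrent: 1,
  minTime: 100,
  timeout: 10 * 60 * 1000 // delete limiters after 10 minutes of inactivity
});
```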
on("created")
1group.on("created", (limiter, key) => { 2 console.log("A new limiter was created for key: " + key) 3 4 // Prepare the limiter, for example we'll want to listen to its "error" events! 5 limiter.on("error", (err) => { 6 // Handle errors here 7 }) 8});
Listening for the `"created"` event is the recommended way to set up a new limiter. Your event handler is executed before `key()` returns the newly created limiter.
updateSettings()
```js
const group = new Bottleneck.Group({ maxConcurrent: 2, minTime: 250 });
group.updateSettings({ minTime: 500 });
```
After executing the above commands, new limiters will be created with `{ maxConcurrent: 2, minTime: 500 }`.
deleteKey()
- `str` : The key for the limiter to delete.

Manually deletes the limiter at the specified key. When using Clustering, the Redis data is immediately deleted and the other Groups in the Cluster will eventually delete their local key automatically, unless it is still being used.
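For example:

```js
group.deleteKey("77.66.54.32");
```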
keys()
Returns an array containing all the keys in the Group.
clusterKeys()
Same as `group.keys()`, but returns all keys in this Group ID across the Cluster.
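A quick sketch of both (assuming `clusterKeys()` resolves asynchronously, like the other Cluster methods):

```js
// Local keys only
console.log(group.keys()); // e.g. [ "77.66.54.32" ]

// Keys for this Group ID across the whole Cluster
group.clusterKeys().then((keys) => console.log(keys));
```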
limiters()
```js
const limiters = group.limiters();

console.log(limiters);
// [ { key: "some key", limiter: <limiter> }, { key: "some other key", limiter: <some other limiter> } ]
```
Batching
Some APIs can accept multiple operations in a single call. Bottleneck's Batching feature helps you take advantage of those APIs:
```js
const batcher = new Bottleneck.Batcher({
  maxTime: 1000,
  maxSize: 10
});

batcher.on("batch", (batch) => {
  console.log(batch); // ["some-data", "some-other-data"]

  // Handle batch here
});

batcher.add("some-data");
batcher.add("some-other-data");
```
`batcher.add()` returns a promise that resolves once the request has been flushed to a `"batch"` event.
Option | Default | Description |
---|---|---|
maxTime | null (unlimited) | Maximum acceptable time (in milliseconds) a request can have to wait before being flushed to the "batch" event. |
maxSize | null (unlimited) | Maximum number of requests in a batch. |
Batching doesn't throttle requests, it only groups them up optimally according to your `maxTime` and `maxSize` settings.
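Since a Batcher doesn't throttle, a common pattern is to pair it with a limiter so each flushed batch becomes one rate-limited call. A sketch, where `callBulkAPI` is a hypothetical function that sends a whole batch in a single request:

```js
const batcher = new Bottleneck.Batcher({ maxTime: 1000, maxSize: 10 });
const limiter = new Bottleneck({ maxConcurrent: 1, minTime: 500 });

batcher.on("batch", (batch) => {
  // One rate-limited call per flushed batch
  limiter.schedule(() => callBulkAPI(batch)) // callBulkAPI() is hypothetical
  .catch((error) => { /* handle errors */ });
});

batcher.add("some-data");
```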
Clustering
Clustering lets many limiters access the same shared state, stored in Redis. Changes to the state are Atomic, Consistent and Isolated (and fully ACID with the right Durability configuration), to eliminate any chances of race conditions or state corruption. Your settings, such as `maxConcurrent`, `minTime`, etc., are shared across the whole cluster, which means, for example, that `{ maxConcurrent: 5 }` guarantees no more than 5 jobs can ever run at a time in the entire cluster of limiters. 100% of Bottleneck's features are supported in Clustering mode. Enabling Clustering is as simple as changing a few settings. It's also a convenient way to store or export state for later use.
Bottleneck will attempt to spread load evenly across limiters.
Enabling Clustering
First, add `redis` or `ioredis` to your application's dependencies:
```sh
# NodeRedis (https://github.com/NodeRedis/node_redis)
npm install --save redis

# or ioredis (https://github.com/luin/ioredis)
npm install --save ioredis
```
Then create a limiter or a Group:
```js
const limiter = new Bottleneck({
  /* Some basic options */
  maxConcurrent: 5,
  minTime: 500,
  id: "my-super-app", // All limiters with the same id will be clustered together

  /* Clustering options */
  datastore: "redis", // or "ioredis"
  clearDatastore: false,
  clientOptions: {
    host: "127.0.0.1",
    port: 6379

    // Redis client options
    // Using NodeRedis? See https://github.com/NodeRedis/node_redis#options-object-properties
    // Using ioredis? See https://github.com/luin/ioredis/blob/master/API.md#new-redisport-host-options
  }
});
```
Option | Default | Description |
---|---|---|
datastore | "local" | Where the limiter stores its internal state. The default ("local" ) keeps the state in the limiter itself. Set it to "redis" or "ioredis" to enable Clustering. |
clearDatastore | false | When set to true , on initial startup, the limiter will wipe any existing Bottleneck state data on the Redis db. |
clientOptions | {} | This object is passed directly to the redis client library you've selected. |
clusterNodes | null | ioredis only. When clusterNodes is not null, the client will be instantiated by calling new Redis.Cluster(clusterNodes, clientOptions) instead of new Redis(clientOptions) . |
timeout | null (no TTL) | The Redis TTL in milliseconds for the keys created by the limiter. When timeout is set, the limiter's state will be automatically removed from Redis after timeout milliseconds of inactivity. |
Redis | null | Overrides the import/require of the redis/ioredis library. You shouldn't need to set this option unless your application is failing to start due to a failure to require/import the client library. |
Note: When using Groups, the `timeout` option has a default of `300000` milliseconds and the generated limiters automatically receive an `id` with the pattern `${group.id}-${KEY}`.
Note: If you are seeing a runtime error due to the `require()` function not being able to load `redis`/`ioredis`, then directly pass the module as the `Redis` option. Example:
```js
import Redis from "ioredis"

const limiter = new Bottleneck({
  id: "my-super-app",
  datastore: "ioredis",
  clientOptions: { host: '12.34.56.78', port: 6379 },
  Redis
});
```
Unfortunately, this is a side effect of having to disable inlining, which is necessary to make Bottleneck easy to use in the browser.
Important considerations when Clustering
The first limiter connecting to Redis will store its constructor options on Redis and all subsequent limiters will be using those settings. You can alter the constructor options used by all the connected limiters by calling `updateSettings()`. The `clearDatastore` option instructs a new limiter to wipe any previous Bottleneck data (for that `id`), including previously stored settings.
Queued jobs are NOT stored on Redis. They are local to each limiter. Exiting the Node.js process will lose those jobs. This is because Bottleneck has no way to propagate the JS code to run a job across a different Node.js process than the one it originated on. Bottleneck doesn't keep track of the queue contents of the limiters on a cluster for performance and reliability reasons. You can use something like `BeeQueue` in addition to Bottleneck to get around this limitation.
Due to the above, functionality relying on the queue length happens purely locally:
- Priorities are local. A higher priority job will run before a lower priority job on the same limiter. Another limiter on the cluster might run a lower priority job before our higher priority one.
- Assuming constant priority levels, Bottleneck guarantees that jobs will be run in the order they were received on the same limiter. Another limiter on the cluster might run a job received later before ours runs.
- `highWater` and load shedding (strategies) are per limiter. However, one limiter entering Blocked mode will put the entire cluster in Blocked mode until `penalty` milliseconds have passed. See Strategies.
- The `"empty"` event is triggered when the (local) queue is empty.
- The `"idle"` event is triggered when the (local) queue is empty and no jobs are currently running anywhere in the cluster.
You must work around these limitations in your application code if they are an issue to you. The `publish()` method could be useful here.
The current design guarantees reliability, is highly performant and lets limiters come and go. Your application can scale up or down, and clients can be disconnected at any time without issues.
It is strongly recommended that you give an `id` to every limiter and Group, since it is used to build the name of your limiter's Redis keys! Limiters with the same `id` inside the same Redis db will be sharing the same datastore.
It is strongly recommended that you set an `expiration` (see Job Options) on every job, since that lets the cluster recover from crashed or disconnected clients. Otherwise, a client crashing while executing a job would not be able to tell the cluster to decrease its number of "running" jobs. By using expirations, those lost jobs are automatically cleared after the specified time has passed. Using expirations is essential to keeping a cluster reliable in the face of unpredictable application bugs, network hiccups, and so on.
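A sketch of a clustered job carrying an expiration (the job function is a placeholder):

```js
// If this client crashes mid-job, the cluster reclaims the "running" slot after 60 seconds
limiter.schedule({ id: "sync-user-42", expiration: 60 * 1000 }, () => syncUser(42)) // syncUser() is a placeholder
.catch((error) => { /* a BottleneckError once the job expires */ });
```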
Network latency between Node.js and Redis is not taken into account when calculating timings (such as `minTime`). To minimize the impact of latency, Bottleneck only performs a single Redis call per lifecycle transition. Keeping the Redis server close to your limiters will help you get a more consistent experience. Keeping the system time consistent across all clients will also help.
It is strongly recommended to set up an `"error"` listener on all your limiters and on your Groups.
Clustering Methods
The `ready()`, `publish()` and `clients()` methods also exist when using the `local` datastore, for code compatibility reasons: code written for `redis`/`ioredis` won't break with `local`.
ready()
This method returns a promise that resolves once the limiter is connected to Redis.
As of v2.9.0, it's no longer necessary to wait for `.ready()` to resolve before issuing commands to a limiter. The commands will be queued until the limiter successfully connects. Make sure to listen to the `"error"` event to handle connection errors.
```js
const limiter = new Bottleneck({/* options */});

limiter.on("error", (err) => {
  // handle network errors
});

limiter.ready()
.then(() => {
  // The limiter is ready
});
```
publish(message)
This method broadcasts the message
string to every limiter in the Cluster. It returns a promise.
```js
const limiter = new Bottleneck({/* options */});

limiter.on("message", (msg) => {
  console.log(msg); // prints "this is a string"
});

limiter.publish("this is a string");
```
To send objects, stringify them first:
1limiter.on("message", (msg) => { 2 console.log(JSON.parse(msg).hello) // prints "world" 3}); 4 5limiter.publish(JSON.stringify({ hello: "world" }));
clients()
If you need direct access to the redis clients, use .clients()
:
```js
console.log(limiter.clients());
// { client: <Redis Client>, subscriber: <Redis Client> }
```
Additional Clustering information
- Bottleneck is compatible with Redis Clusters, but you must use the `ioredis` datastore and the `clusterNodes` option.
- Bottleneck is compatible with Redis Sentinel, but you must use the `ioredis` datastore.
- Bottleneck's data is stored in Redis keys starting with `b_`. It also uses pubsub channels starting with `b_`. It will not interfere with any other data stored on the server.
- Bottleneck loads a few Lua scripts on the Redis server using the `SCRIPT LOAD` command. These scripts only take up a few Kb of memory. Running the `SCRIPT FLUSH` command will cause any connected limiters to experience critical errors until a new limiter connects to Redis and loads the scripts again.
- The Lua scripts are highly optimized and designed to use as few resources as possible.
Managing Redis Connections
Bottleneck needs to create 2 Redis Clients to function, one for normal operations and one for pubsub subscriptions. These 2 clients are kept in a `Bottleneck.RedisConnection` (NodeRedis) or a `Bottleneck.IORedisConnection` (ioredis) object, referred to as the Connection object.
By default, every Group and every standalone limiter (a limiter not created by a Group) will create their own Connection object, but it is possible to manually control this behavior. In this example, every Group and limiter is sharing the same Connection object and therefore the same 2 clients:
```js
const connection = new Bottleneck.RedisConnection({
  clientOptions: {/* NodeRedis/ioredis options */}
  // ioredis also accepts `clusterNodes` here
});

const limiter = new Bottleneck({ connection: connection });
const group = new Bottleneck.Group({ connection: connection });
```
You can access and reuse the Connection object of any Group or limiter:
```js
const group = new Bottleneck.Group({ connection: limiter.connection });
```
When a Connection object is created manually, the connectivity `"error"` events are emitted on the Connection itself.
1connection.on("error", (err) => { /* handle connectivity errors here */ });
If you already have a NodeRedis/ioredis client, you can ask Bottleneck to reuse it, although currently the Connection object will still create a second client for pubsub operations:
```js
import Redis from "redis";
const client = Redis.createClient({/* options */});

const connection = new Bottleneck.RedisConnection({
  // `clientOptions` and `clusterNodes` will be ignored since we're passing a raw client
  client: client
});

const limiter = new Bottleneck({ connection: connection });
const group = new Bottleneck.Group({ connection: connection });
```
Depending on your application, using more clients can improve performance.
Use the `disconnect(flush)` method to close the Redis clients.
```js
limiter.disconnect();
group.disconnect();
```
If you created the Connection object manually, you need to call `connection.disconnect()` instead, for safety reasons.
Debugging your application
Debugging complex scheduling logic can be difficult, especially when priorities, weights, and network latency all interact with one another.
If your application is not behaving as expected, start by making sure you're catching `"error"` events emitted by your limiters and your Groups. Those errors are most likely uncaught exceptions from your application code.
Make sure you've read the 'Gotchas' section.
To see exactly what a limiter is doing in real time, listen to the `"debug"` event. It contains detailed information about how the limiter is executing your code. Adding job IDs to all your jobs makes the debug output more readable.
When Bottleneck has to fail one of your jobs, it does so by using `BottleneckError` objects. This lets you tell those errors apart from your own code's errors:
```js
limiter.schedule(fn)
.then((result) => { /* ... */ })
.catch((error) => {
  if (error instanceof Bottleneck.BottleneckError) {
    /* ... */
  }
});
```
Upgrading to v2
The internal algorithms essentially haven't changed from v1, but many small changes to the interface were made to introduce new features.
All the breaking changes:
- Bottleneck v2 requires Node 6+ or a modern browser. Use `require("bottleneck/es5")` if you need ES5 support in v2. Bottleneck v1 will continue to use ES5 only.
- The Bottleneck constructor now takes an options object. See Constructor.
- The `Cluster` feature is now called `Group`. This is to distinguish it from the new v2 Clustering feature.
- The `Group` constructor takes an options object to match the limiter constructor.
- Jobs take an optional options object. See Job options.
- Removed `submitPriority()`, use `submit()` with an options object instead.
- Removed `schedulePriority()`, use `schedule()` with an options object instead.
- The `rejectOnDrop` option is now `true` by default. It can be set to `false` if you wish to retain v1 behavior. However this option is left undocumented as enabling it is considered to be a poor practice.
- Use `null` instead of `0` to indicate an unlimited `maxConcurrent` value.
- Use `null` instead of `-1` to indicate an unlimited `highWater` value.
- Renamed `changeSettings()` to `updateSettings()`, it now returns a promise to indicate completion. It takes the same options object as the constructor.
- Renamed `nbQueued()` to `queued()`.
- Renamed `nbRunning` to `running()`, it now returns its result using a promise.
- Removed `isBlocked()`.
- Changing the Promise library is now done through the options object like any other limiter setting.
- Removed `changePenalty()`, it is now done through the options object like any other limiter setting.
- Removed `changeReservoir()`, it is now done through the options object like any other limiter setting.
- Removed `stopAll()`. Use the new `stop()` method.
- `check()` now accepts an optional `weight` argument, and returns its result using a promise.
- Removed the Group `changeTimeout()` method. Instead, pass a `timeout` option when creating a Group.
Version 2 is more user-friendly and powerful.
After upgrading your code, please take a minute to read the Debugging your application chapter.
Contributing
This README is always in need of improvements. If wording can be clearer and simpler, please consider forking this repo and submitting a Pull Request, or simply opening an issue.
Suggestions and bug reports are also welcome.
To work on the Bottleneck code, simply clone the repo, make your changes to the files located in `src/` only, then run `./scripts/build.sh && npm test` to ensure that everything is set up correctly.
To speed up compilation time during development, run `./scripts/build.sh dev` instead. Make sure to build and test without `dev` before submitting a PR.
The tests must also pass in Clustering mode and using the ES5 bundle. You'll need a Redis server running locally (latency needs to be minimal to run the tests). If the server isn't using the default hostname and port, you can set those in the `.env` file. Then run `./scripts/build.sh && npm run test-all`.
All contributions are appreciated and will be considered.
No vulnerabilities found.
Reason
no binaries found in the repo
Reason
license file detected
Details
- Info: project has a license file: LICENSE:0
- Info: FSF or OSI recognized license: MIT License: LICENSE:0
Reason
Found 3/27 approved changesets -- score normalized to 1
Reason
0 commit(s) and 0 issue activity found in the last 90 days -- score normalized to 0
Reason
no effort to earn an OpenSSF best practices badge detected
Reason
security policy file not detected
Details
- Warn: no security policy file detected
- Warn: no security file to analyze
Reason
project is not fuzzed
Details
- Warn: no fuzzer integrations found
Reason
branch protection not enabled on development/release branches
Details
- Warn: branch protection not enabled for branch 'master'
Reason
SAST tool is not run on all commits -- score normalized to 0
Details
- Warn: 0 commits out of 6 are checked with a SAST tool
Reason
28 existing vulnerabilities detected
Details
- Warn: Project is vulnerable to: GHSA-67hx-6x53-jw92
- Warn: Project is vulnerable to: GHSA-93q8-gq69-wqmw
- Warn: Project is vulnerable to: GHSA-grv7-fg5c-xmjg
- Warn: Project is vulnerable to: GHSA-w8qv-6jwh-64r5
- Warn: Project is vulnerable to: GHSA-3xgq-45jj-v275
- Warn: Project is vulnerable to: GHSA-gxpj-cx7g-858c
- Warn: Project is vulnerable to: GHSA-w573-4hg7-7wgq
- Warn: Project is vulnerable to: GHSA-phwq-j96m-2c2q
- Warn: Project is vulnerable to: GHSA-ghr5-ch3p-vcr6
- Warn: Project is vulnerable to: GHSA-2j2x-2gpw-g8fm
- Warn: Project is vulnerable to: GHSA-9c47-m6qq-7p4h
- Warn: Project is vulnerable to: GHSA-6c8f-qphg-qjgp
- Warn: Project is vulnerable to: GHSA-jf85-cpcp-j695
- Warn: Project is vulnerable to: GHSA-p6mc-m468-83gw
- Warn: Project is vulnerable to: GHSA-29mw-wpgm-hmr9
- Warn: Project is vulnerable to: GHSA-35jh-r3h4-6jhm
- Warn: Project is vulnerable to: GHSA-952p-6rrq-rcjv
- Warn: Project is vulnerable to: GHSA-f8q6-p94x-37v3
- Warn: Project is vulnerable to: GHSA-vh95-rmgr-6w4m / GHSA-xvch-5gv4-984h
- Warn: Project is vulnerable to: GHSA-fhjf-83wg-r2j9
- Warn: Project is vulnerable to: GHSA-hj48-42vr-x3v9
- Warn: Project is vulnerable to: GHSA-35q2-47q7-3pc3
- Warn: Project is vulnerable to: GHSA-gcx4-mw62-g8wm
- Warn: Project is vulnerable to: GHSA-c2qf-rxjj-qqgw
- Warn: Project is vulnerable to: GHSA-4g88-fppr-53pp
- Warn: Project is vulnerable to: GHSA-4jqc-8m5r-9rpr
- Warn: Project is vulnerable to: GHSA-c4w7-xm78-47vh
- Warn: Project is vulnerable to: GHSA-p9pc-299p-vxgp
Score
1.9/10
Last Scanned on 2024-11-25
The Open Source Security Foundation is a cross-industry collaboration to improve the security of open source software (OSS). The Scorecard provides security health metrics for open source projects.