🐦 A resilience and fault-handling library. Supports Backoffs, Retries, Circuit Breakers, Timeouts, Bulkhead Isolation, and Fallbacks.
Installation
npm install cockatiel
Developer Guide
TypeScript
Yes
Module System
CommonJS
Min. Node Version
>=16
Node Version
20.12.1
NPM Version
10.5.0
Score
99.6
Supply Chain
100
Quality
78.6
Maintenance
100
Vulnerability
100
Languages
TypeScript (99.91%)
JavaScript (0.09%)
Developer
connor4312
Download Statistics
Total Downloads
12,483,738
Last Day
9,276
Last Week
118,541
Last Month
700,141
Last Year
5,761,625
GitHub Statistics
1,596 Stars
111 Commits
51 Forks
12 Watching
3 Branches
11 Contributors
Bundle Size
Minified
18.20 kB
Minified + Gzipped
5.12 kB
Package Meta Information
Latest Version
3.2.1
Package Id
cockatiel@3.2.1
Unpacked Size
454.52 kB
Size
69.92 kB
File Count
177
NPM Version
10.5.0
Node Version
20.12.1
Published On
22 Jul 2024
Cockatiel
Cockatiel is a resilience and transient-fault-handling library that allows developers to express policies such as Retry, Circuit Breaker, Timeout, Bulkhead Isolation, and Fallback. .NET has Polly, a wonderful one-stop shop for all your fault handling needs. I missed having such a library for my JavaScript projects, and grew tired of copy-pasting retry logic between them. Hence, this module!
npm install --save cockatiel
Then go forth with confidence:
```js
import {
  ConsecutiveBreaker,
  ExponentialBackoff,
  retry,
  handleAll,
  circuitBreaker,
  wrap,
} from 'cockatiel';
import { database } from './my-db';

// Create a retry policy that'll try whatever function we execute 3
// times with a randomized exponential backoff.
const retryPolicy = retry(handleAll, { maxAttempts: 3, backoff: new ExponentialBackoff() });

// Create a circuit breaker that'll stop calling the executed function for 10
// seconds if it fails 5 times in a row. This can give time for e.g. a database
// to recover without getting tons of traffic.
const circuitBreakerPolicy = circuitBreaker(handleAll, {
  halfOpenAfter: 10 * 1000,
  breaker: new ConsecutiveBreaker(5),
});

// Combine these! Create a policy that retries 3 times, calling through the circuit breaker
const retryWithBreaker = wrap(retryPolicy, circuitBreakerPolicy);

exports.handleRequest = async (req, res) => {
  // Call your database safely!
  const data = await retryWithBreaker.execute(() => database.getInfo(req.params.id));
  return res.json(data);
};
```
I recommend reading the Polly wiki for more details on the mechanics of the patterns we provide.
Table of Contents
- IPolicy (the shape of a policy)
- Policy
  - Events
- retry(policy, options)
- circuitBreaker(policy, { halfOpenAfter, breaker[, initialState] })
- timeout(duration, strategy)
- bulkhead(limit[, queue])
- fallback(policy, valueOrFactory)
- See Also
IPolicy (the shape of a policy)
All Cockatiel fault handling policies (fallbacks, circuit breakers, bulkheads, timeouts, retries) adhere to the same interface. In TypeScript, this is given as:
```ts
export interface IPolicy<ContextType extends { signal: AbortSignal }> {
  /**
   * Fires on the policy when a request successfully completes and some
   * successful value will be returned. In a retry policy, this is fired once
   * even if the request took multiple retries to succeed.
   */
  readonly onSuccess: Event<ISuccessEvent>;

  /**
   * Fires on the policy when a request fails *due to a handled reason* and
   * the rejection will be given to the caller.
   */
  readonly onFailure: Event<IFailureEvent>;

  /**
   * Runs the function through behavior specified by the policy.
   */
  execute<T>(fn: (context: ContextType) => PromiseLike<T> | T, signal?: AbortSignal): Promise<T>;
}
```
If you don't read TypeScript often, here's what it means:
- There are two events, onSuccess/onFailure, that are called when a call succeeds or fails. Note that onFailure is only called if a handled error is thrown. As a design decision, Cockatiel won't assume all thrown errors are actually failures unless you tell us. For example, in your application you might have errors thrown if the user submits invalid input, and triggering fault handling behavior for this reason would not be desirable!
- There's an execute function that you can use to "wrap" your own function. Anything you return from that function is returned, in a promise, from execute. You can optionally pass an abort signal to the execute() function, and the function will always be called with an object at least containing an abort signal (some policies might add extra metadata for you).
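To make the "only handled errors count" decision concrete, here's a sketch of the kind of filter it implies. The error classes and the isHandled predicate are hypothetical illustrations, not part of cockatiel's API:

```javascript
// Hypothetical error types, purely for illustration:
class NetworkError extends Error {}
class ValidationError extends Error {}

// Policies only treat errors matching their filter as failures -- the same
// idea that handleType/handleWhen express. Anything else propagates untouched.
const isHandled = err => err instanceof NetworkError;

console.log(isHandled(new NetworkError('socket hang up'))); // true
console.log(isHandled(new ValidationError('bad input'))); // false
```

A retry or circuit breaker built on such a filter would ignore the validation error entirely and rethrow it to the caller.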
Policy
The Policy defines how errors and results are handled. Everything in Cockatiel ultimately deals with handling errors or bad results; the Policy sets up which errors and results should trigger that handling.
handleAll
A generic policy to handle all errors.
```js
import { handleAll } from 'cockatiel';

retry(handleAll /* ... */);
```
handleType(ctor[, filter]) / policy.orType(ctor[, filter])
Tells the policy to handle errors of the given type, passing in the constructor. If a filter function is also passed, we'll only handle errors if that also returns true.
```js
import { handleType } from 'cockatiel';

handleType(NetworkError).orType(HttpError, err => err.statusCode === 503);
// ...
```
handleWhen(filter) / policy.orWhen(filter)
Tells the policy to handle any error for which the filter returns truthy.
```js
import { handleWhen } from 'cockatiel';

handleWhen(err => err instanceof NetworkError).orWhen(err => err.shouldRetry === true);
// ...
```
handleResultType(ctor[, filter]) / policy.orResultType(ctor[, filter])
Tells the policy to treat certain return values of the function as errors--retrying if they appear, for instance. Results will be retried if they're an instance of the given class. If a filter function is also passed, we'll only treat return values as errors if that also returns true.
```js
import { handleResultType } from 'cockatiel';

handleResultType(ReturnedNetworkError).orResultType(HttpResult, res => res.statusCode === 503);
// ...
```
handleWhenResult(filter) / policy.orWhenResult(filter)
Tells the policy to treat certain return values of the function as errors--retrying if they appear, for instance. Results will be retried if the filter function returns true.
```js
import { handleWhenResult } from 'cockatiel';

handleWhenResult(res => res.statusCode === 503).orWhenResult(res => res.statusCode === 429);
// ...
```
wrap(...policies)
Wraps the given set of policies into a single policy. For instance, this:
```js
const result = await retry.execute(() =>
  breaker.execute(() => timeout.execute(({ signal }) => getData(signal))),
);
```
Is the equivalent to:
```js
import { wrap } from 'cockatiel';

const result = await wrap(retry, breaker, timeout).execute(({ signal }) => getData(signal));
```
The context argument passed to the executed function is the merged object of all previous policies. So for instance, in the above example you'll get the abort signal from the TimeoutPolicy as well as the attempt number from the RetryPolicy:
```js
import { wrap } from 'cockatiel';

wrap(retry, breaker, timeout).execute(context => {
  console.log(context);
  // => { attempts: 1, cancellation: }
});
```
The individual wrapped policies are accessible on the policies property of the policy returned from wrap().
@usePolicy(policy)
A decorator that can be used to wrap class methods and apply the given policy to them. It also passes the context object, normally given as the argument to the function in Policy.execute, as the last argument of the decorated method. For example:
```ts
import { usePolicy, handleAll, retry } from 'cockatiel';

const retryPolicy = retry(handleAll, { maxAttempts: 3 });

class Database {
  @usePolicy(retryPolicy)
  public getUserInfo(userId, context) {
    console.log('Retry attempt number', context.attempt);
    // implementation here
  }
}

const db = new Database();
db.getUserInfo(3).then(info => console.log('User 3 info:', info));
```
Note that it will force the return type to be a Promise, since that's what policies return.
noop
A no-op policy, which may be useful for tests and stubs.
```js
import { noop, handleAll, retry } from 'cockatiel';

const policy = isProduction ? retry(handleAll, { maxAttempts: 3 }) : noop;

export async function handleRequest() {
  return policy.execute(() => getInfoFromDatabase());
}
```
Events
Cockatiel uses a simple bespoke style for events, similar to those that we use in VS Code. These events provide better type-safety (you can never subscribe to the wrong event name) and better functionality around triggering listeners.
An event can be subscribed to simply by passing a callback. Take onFailure for instance:
```js
const listener = policy.onFailure(error => {
  console.log(error);
});
```
The event returns an IDisposable instance. To unsubscribe the listener, call .dispose() on the returned instance. It's always safe to call an IDisposable's .dispose() multiple times.
```js
listener.dispose();
```
We provide a couple extra utilities around events as well.
Event.toPromise(event[, signal])
Returns a promise that resolves once the event fires. Optionally, you can pass in an AbortSignal to control when you stop listening, which will reject the promise with a TaskCancelledError if it's not already resolved.
```js
import { Event } from 'cockatiel';

async function waitForFallback(policy) {
  await Event.toPromise(policy.onFallback);
  console.log('a fallback happened!');
}
```
Event.once(event, callback)
Waits for the event to fire once, and then automatically unregisters the listener. This method itself returns an IDisposable, which you could use to unregister the listener if needed.
```js
import { Event } from 'cockatiel';

async function waitForFallback(policy) {
  Event.once(policy.onFallback, () => {
    console.log('a fallback happened!');
  });
}
```
retry(policy, options)
retry() uses a Policy to retry running something multiple times. Like other policies, a retry policy can be reused across multiple calls.
To use retry(), first pass in the Policy to use, and then the options. The options are an object containing:
- maxAttempts: the number of attempts to make before giving up
- backoff: a generator that tells Cockatiel how long to wait between attempts. A number of backoff implementations are provided out of the box.
Here are some examples:
```js
import { retry, handleAll, handleType, ExponentialBackoff } from 'cockatiel';

const response1 = await retry(
  handleAll, // handle all errors
  { maxAttempts: 3 }, // retry three times, with no backoff
).execute(() => getJson('https://example.com'));

const response2 = await retry(
  handleType(NetworkError), // handle only network errors,
  { maxAttempts: 3, backoff: new ExponentialBackoff() }, // backoff exponentially 3 times
).execute(() => getJson('https://example.com'));
```
Backoffs
Backoff algorithms are immutable. The backoff class adheres to the interface:
```ts
export interface IBackoffFactory<T> {
  /**
   * Returns the next backoff duration.
   */
  next(context: T): IBackoff<T>;
}
```
The backoff returned from the next() call has the appropriate delay and a next() method of its own.
```ts
export interface IBackoff<T> {
  next(context: T): IBackoff<T>; // same as above

  /**
   * Returns the number of milliseconds to wait for this backoff attempt.
   */
  readonly duration: number;
}
```
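To make the immutability concrete, here's a toy backoff in this shape (a sketch only, not cockatiel's implementation, and it ignores the context argument): each next() call returns a fresh object rather than mutating the current one.

```javascript
// A toy doubling backoff satisfying the IBackoff shape above. Each next()
// returns a *new* backoff with double the duration; nothing is mutated.
const doubling = duration => ({
  duration,
  next: () => doubling(duration * 2),
});

let backoff = doubling(128);
console.log(backoff.duration); // 128
backoff = backoff.next();
console.log(backoff.duration); // 256
console.log(backoff.next().duration); // 512
```

Because each step is a new value, a single backoff instance can be shared safely across concurrent retry chains.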
ConstantBackoff
A backoff that backs off for a constant amount of time.
```js
import { ConstantBackoff } from 'cockatiel';

// Waits 50ms between back offs, forever
const foreverBackoff = new ConstantBackoff(50);
```
ExponentialBackoff
Tip: exponential backoffs and circuit breakers are great friends!
The crowd favorite. By default, it uses a decorrelated jitter algorithm, which is a good default for most applications. Takes in an options object, which can have any of these properties:
```ts
export interface IExponentialBackoffOptions<S> {
  /**
   * Delay generator function to use. This package provides several of these.
   * Defaults to "decorrelatedJitterGenerator", a good default for most
   * scenarios (see the linked Polly issue).
   *
   * @see https://github.com/App-vNext/Polly/issues/530
   * @see https://aws.amazon.com/blogs/architecture/exponential-backoff-and-jitter/
   */
  generator: GeneratorFn<S>;

  /**
   * Maximum delay, in milliseconds. Defaults to 30s.
   */
  maxDelay: number;

  /**
   * Backoff exponent. Defaults to 2.
   */
  exponent: number;

  /**
   * The initial, first delay of the backoff, in milliseconds.
   * Defaults to 128ms.
   */
  initialDelay: number;
}
```
Example:
```js
import { ExponentialBackoff, noJitterGenerator } from 'cockatiel';

// Use all the defaults. Decorrelated jitter, 30 seconds max delay, infinite attempts:
const defaultBackoff = new ExponentialBackoff();

// Have some lower limits:
const limitedBackoff = new ExponentialBackoff({ maxDelay: 1000, initialDelay: 4 });

// Use a backoff without jitter
const noJitterBackoff = new ExponentialBackoff({ generator: noJitterGenerator });
```
Several jitter strategies are provided. This AWS blog post has more information around the strategies and why you might want to use them. The available jitter generators exported from cockatiel are:
- decorrelatedJitterGenerator -- the default implementation, the one that Polly.Contrib.WaitAndRetry uses
- noJitterGenerator -- does not add any jitter
- fullJitterGenerator -- jitters between [0, interval)
- halfJitterGenerator -- jitters between [interval / 2, interval)
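As a rough sketch of what the full- and half-jitter strategies compute (with an injectable random source so the output is deterministic; cockatiel's real generators have a different signature and also thread state between attempts, which decorrelated jitter needs):

```javascript
// Jitter math sketches -- `rand` returns a float in [0, 1).
const noJitter = (interval, rand) => interval;
const fullJitter = (interval, rand) => rand() * interval; // [0, interval)
const halfJitter = (interval, rand) => interval / 2 + (rand() * interval) / 2; // [interval / 2, interval)

const fixed = () => 0.5; // deterministic stand-in for Math.random
console.log(noJitter(1000, fixed)); // 1000
console.log(fullJitter(1000, fixed)); // 500
console.log(halfJitter(1000, fixed)); // 750
```

Spreading delays out like this keeps a fleet of clients that failed at the same moment from all retrying at the same moment.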
IterableBackoff
Takes in a list of delays, and goes through them one by one. When it reaches the end of the list, the backoff will continue to use the last value.
```js
import { IterableBackoff } from 'cockatiel';

// Wait 100ms, 200ms, and then 500ms between attempts:
const backoff = new IterableBackoff([100, 200, 500]);
```
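The "continue using the last value" behavior amounts to clamping the attempt index into the delay list, roughly like this (a sketch of the idea, not cockatiel internals):

```javascript
const delays = [100, 200, 500];

// Attempts past the end of the list keep using the final delay:
const delayFor = attempt => delays[Math.min(attempt, delays.length - 1)];

console.log([0, 1, 2, 3, 4].map(delayFor)); // [ 100, 200, 500, 500, 500 ]
```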
DelegateBackoff
Delegates determining the backoff to the given function. The function should return a number of milliseconds to wait.
```js
import { DelegateBackoff } from 'cockatiel';

// Try with any random delay up to 500ms
const backoff = new DelegateBackoff(context => Math.random() * 500);
```
The first parameter is the generic context in which the backoff is being used. For retries, the context is an interface like this:
```ts
export interface IRetryBackoffContext<ReturnType> {
  /**
   * The retry attempt, starting at 1 for calls into backoffs.
   */
  attempt: number;

  /**
   * The result of the last method call. Either a thrown error, or a value
   * that we determined should be retried upon.
   */
  result: { error: Error } | { value: ReturnType };
}
```
You can also take in a state as the second parameter, and return an object of the shape { state: S, delay: number }. Here are both of those in action, used to create a backoff that stops backing off if we get the same error twice in a row, and otherwise backs off exponentially:
```js
import { DelegateBackoff } from 'cockatiel';

const myDelegateBackoff = new DelegateBackoff((context, lastError) => {
  if (context.result.error && context.result.error === lastError) {
    throw context.result.error;
  }

  return { delay: 100 * Math.pow(2, context.attempt), state: context.result.error };
});
```
retry.execute(fn[, signal])
Executes the function. The current retry context, containing the attempt number and abort signal, { attempt: number, signal: AbortSignal }, is passed as the function's first argument. The function should throw, return a promise, or return a value, which gets handled as configured in the Policy.
If the function doesn't succeed before the backoff ceases or cancellation is requested, the last error thrown will be bubbled up, or the last result will be returned (if you used any of the handleResult* methods).
```js
await retry(handleAll, { maxAttempts: 3 }).execute(() => getJson('https://example.com'));
```
retry.dangerouslyUnref()
When retrying, a referenced timer is created. This means the Node.js event loop is kept active while we're delaying a retried call. Calling this method on the retry builder will unreference the timer, allowing the process to exit even if a retry might still be pending:
```js
const response1 = await retry(handleAll, { maxAttempts: 3 })
  .dangerouslyUnref()
  .execute(() => getJson('https://example.com'));
```
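Under the hood this maps onto Node's timer.unref(): an unref'd timer no longer keeps the event loop (and therefore the process) alive. A quick standalone demonstration:

```javascript
// A referenced timer would keep the process running until it fires; after
// unref(), Node may exit even though the timer is still pending.
const timer = setTimeout(() => console.log('retry delay elapsed'), 60_000);
timer.unref();
console.log(timer.hasRef()); // false
clearTimeout(timer); // tidy up for this demo
```

The "dangerously" prefix is apt: an in-flight retry can silently vanish if the process exits while the delay is pending.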
retry.onRetry(callback)
An event emitter that fires when we retry a call, before any backoff. It's invoked with an object that includes:
- the delay we're going to wait before retrying,
- the attempt number of the upcoming retry, starting at 1, and;
- either a thrown error like { error: someError, delay: number }, or an errorful result in an object like { value: someValue, delay: number } when using result filtering.
Useful for telemetry. Returns a disposable instance.
```js
const listener = retry.onRetry(reason => console.log('retrying a function call:', reason));

// ...

listener.dispose();
```
retry.onSuccess(callback)
An event emitter that fires whenever a function is successfully called. It's invoked with an object containing the duration in milliseconds to nanosecond precision.
```js
const listener = retry.onSuccess(({ duration }) => {
  console.log(`retry call ran in ${duration}ms`);
});

// ...

listener.dispose();
```
retry.onFailure(callback)
An event emitter that fires whenever a function throws an error or returns an errorful result. It's invoked with the duration of the call, the reason for the failure, and a boolean indicating whether the error is handled by the policy.
```js
const listener = retry.onFailure(({ duration, handled, reason }) => {
  console.log(`retry call ran in ${duration}ms and failed with`, reason);
  console.log(handled ? 'error was handled' : 'error was not handled');
});

// later:
listener.dispose();
```
retry.onGiveUp(callback)
An event emitter that fires when we're no longer retrying a call and are giving up. It's invoked with either a thrown error in an object like { error: someError }, or an errorful result in an object like { value: someValue } when using result filtering. Useful for telemetry. Returns a disposable instance.
```js
const listener = retry.onGiveUp(reason => console.log('giving up on the function call:', reason));

listener.dispose();
```
circuitBreaker(policy, { halfOpenAfter, breaker[, initialState] })
Circuit breakers stop execution for a period of time after a failure threshold has been reached. This is very useful to allow faulting systems to recover without overloading them. See the Polly docs for more detailed information around circuit breakers.
It's important that you reuse the same circuit breaker across multiple requests, otherwise it won't do anything!
To create a breaker, you use a Policy like you normally would, and call circuitBreaker().
- The halfOpenAfter option is the number of milliseconds after which we should try to close the circuit after failure ('closing the circuit' means restarting requests). You may also pass a backoff strategy instead of a constant number of milliseconds if you wish to increase the interval between consecutive failing half-open checks.
- The breaker is the breaker policy which controls when the circuit opens.
- The initialState option can be passed if you're hydrating the breaker from state collected from a previous execution using breaker.toJSON().
Calls to execute() while the circuit is open (not taking requests) will throw a BrokenCircuitError.
```js
import {
  circuitBreaker,
  handleAll,
  BrokenCircuitError,
  ConsecutiveBreaker,
  SamplingBreaker,
  ExponentialBackoff,
} from 'cockatiel';

// Break if more than 20% of requests fail in a 30 second time window:
const breaker = circuitBreaker(handleAll, {
  halfOpenAfter: 10 * 1000,
  breaker: new SamplingBreaker({ threshold: 0.2, duration: 30 * 1000 }),
});

// Break if more than 5 requests in a row fail, and use a backoff for retry attempts:
const consecutiveBreaker = circuitBreaker(handleAll, {
  halfOpenAfter: new ExponentialBackoff(),
  breaker: new ConsecutiveBreaker(5),
});

// Get info from the database, or return 'service unavailable' if it's down/recovering
export async function handleRequest() {
  try {
    return await breaker.execute(() => getInfoFromDatabase());
  } catch (e) {
    if (e instanceof BrokenCircuitError) {
      return 'service unavailable';
    } else {
      throw e;
    }
  }
}
```
Breakers
ConsecutiveBreaker
The ConsecutiveBreaker breaks after n requests in a row fail. Simple, easy.
```js
// Break if more than 5 requests in a row fail:
const breaker = circuitBreaker(handleAll, {
  halfOpenAfter: 10 * 1000,
  breaker: new ConsecutiveBreaker(5),
});
```
CountBreaker
The CountBreaker breaks after a proportion of requests in a count-based sliding window fail. It is inspired by the count-based sliding window in Resilience4j.
```js
// Break if more than 20% of requests fail in a sliding window of size 100:
const breaker = circuitBreaker(handleAll, {
  halfOpenAfter: 10 * 1000,
  breaker: new CountBreaker({ threshold: 0.2, size: 100 }),
});
```
You can specify a minimum number of calls required before the circuit can open, to avoid opening it when there are only a few samples in the sliding window. By default this value is set to the sliding window size, but you can override it if necessary:
```js
const breaker = circuitBreaker(handleAll, {
  halfOpenAfter: 10 * 1000,
  breaker: new CountBreaker({
    threshold: 0.2,
    size: 100,
    minimumNumberOfCalls: 50, // require 50 requests before we can break
  }),
});
```
SamplingBreaker
The SamplingBreaker breaks after a proportion of requests over a time period fail.
```js
// Break if more than 20% of requests fail in a 30 second time window:
const breaker = circuitBreaker(handleAll, {
  halfOpenAfter: 10 * 1000,
  breaker: new SamplingBreaker({ threshold: 0.2, duration: 30 * 1000 }),
});
```
You can specify a minimum requests-per-second value to avoid opening the circuit under periods of low load. By default we'll choose a value such that you need 5 failures per second for the breaker to kick in, and you can configure this if it doesn't work for you:
```js
const breaker = circuitBreaker(handleAll, {
  halfOpenAfter: 10 * 1000,
  breaker: new SamplingBreaker({
    threshold: 0.2,
    duration: 30 * 1000,
    minimumRps: 10, // require 10 requests per second before we can break
  }),
});
```
breaker.execute(fn[, signal])
Executes the function. May throw a BrokenCircuitError if the circuit is open. If a half-open test is currently running and it succeeds, the circuit breaker will check the abort signal (possibly throwing a TaskCancelledError) before continuing to run the inner function.
Otherwise, it calls the inner function and returns what it returns, or throws what it throws.
Like all Policy.execute methods, any propagated { signal: AbortSignal } will be given as the first argument to fn.
```js
const response = await breaker.execute(() => getJson('https://example.com'));
```
breaker.state
The current state of the circuit breaker, allowing for introspection.
```js
import { CircuitState } from 'cockatiel';

if (breaker.state === CircuitState.Open) {
  console.log('the circuit is open right now');
}
```
breaker.onBreak(callback)
An event emitter that fires when the circuit opens as a result of failures. Returns a disposable instance.
```js
const listener = breaker.onBreak(() => console.log('circuit is open'));

listener.dispose();
```
breaker.onReset(callback)
An event emitter that fires when the circuit closes after being broken. Returns a disposable instance.
```js
const listener = breaker.onReset(() => console.log('circuit is closed'));

listener.dispose();
```
breaker.onHalfOpen(callback)
An event emitter that fires when the circuit breaker is half open (running a test call). Either onBreak or onReset will subsequently fire.
```js
const listener = breaker.onHalfOpen(() => console.log('circuit is testing a request'));

listener.dispose();
```
breaker.onStateChange(callback)
An event emitter that fires whenever the circuit state changes in general, after the more specific onReset, onHalfOpen, and onBreak emitters fire.
```js
import { CircuitState } from 'cockatiel';

const listener = breaker.onStateChange(state => {
  if (state === CircuitState.Closed) {
    console.log('circuit breaker is once again closed');
  }
});

listener.dispose();
```
breaker.onSuccess(callback)
An event emitter that fires whenever a function is successfully called. It's invoked with an object containing the duration in milliseconds to nanosecond precision.
```js
const listener = breaker.onSuccess(({ duration }) => {
  console.log(`circuit breaker call ran in ${duration}ms`);
});

// later:
listener.dispose();
```
breaker.onFailure(callback)
An event emitter that fires whenever a function throws an error or returns an errorful result. It's invoked with the duration of the call, the reason for the failure, and a boolean indicating whether the error is handled by the policy.
```js
const listener = breaker.onFailure(({ duration, handled, reason }) => {
  console.log(`circuit breaker call ran in ${duration}ms and failed with`, reason);
  console.log(handled ? 'error was handled' : 'error was not handled');
});

// later:
listener.dispose();
```
breaker.isolate()
Manually holds the circuit open, until the returned disposable is disposed of. While held open, the circuit will throw IsolatedCircuitError, a type of BrokenCircuitError, on attempted executions. It's safe to have multiple isolate() calls; we'll refcount them behind the scenes.
```js
const handle = breaker.isolate();

// later, allow calls again:
handle.dispose();
```
breaker.toJSON()
Returns the circuit breaker state so that it can be re-created later. This is useful in cases like serverless functions where you may want to keep the breaker state across multiple executions.
```js
const breakerState = breaker.toJSON();

// ...in a later execution

const breaker = circuitBreaker(policy, {
  halfOpenAfter: 1000,
  breaker: new ConsecutiveBreaker(3),
  initialState: breakerState,
});
```
Note that if the breaker is currently half open, the serialized state will record it in such a way that it's open when restored and will use the first call as the half-open test.
timeout(duration, strategy)
Creates a timeout policy. The duration specifies how long to wait before timing out execute()'d functions. The strategy is one of "Cooperative" or "Aggressive". An AbortSignal will be passed to any executed function, and in cooperative timeouts we'll simply wait for that function to return or throw. In aggressive timeouts, we'll immediately throw a TaskCancelledError when the timeout is reached, in addition to aborting the passed signal.
```js
import { TimeoutStrategy, timeout, TaskCancelledError } from 'cockatiel';

const timeoutPolicy = timeout(2000, TimeoutStrategy.Cooperative);

export async function handleRequest() {
  try {
    return await timeoutPolicy.execute(({ signal }) => getInfoFromDatabase(signal));
  } catch (e) {
    if (e instanceof TaskCancelledError) {
      return 'database timed out';
    } else {
      throw e;
    }
  }
}
```
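To illustrate the difference between the two strategies, here's a rough standalone sketch of an aggressive timeout built from Promise.race and AbortController (cockatiel rejects with its own TaskCancelledError; a plain Error stands in here):

```javascript
// Aggressive: reject as soon as the deadline passes, *and* abort a signal
// the inner function can observe for cleanup. A cooperative timeout would
// only abort the signal and keep awaiting fn.
function aggressiveTimeout(ms, fn) {
  const ctrl = new AbortController();
  let timer;
  const deadline = new Promise((_, reject) => {
    timer = setTimeout(() => {
      ctrl.abort(); // let the work know it should stop
      reject(new Error('timed out')); // and fail the caller immediately
    }, ms);
  });
  return Promise.race([fn(ctrl.signal), deadline]).finally(() => clearTimeout(timer));
}
```

The trade-off this sketch makes visible: aggressive timeouts unblock the caller promptly, but the abandoned work may still be running in the background unless it honors the aborted signal.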
timeout.dangerouslyUnref()
When timing out, a referenced timer is created. This means the Node.js event loop is kept active while we're waiting for the timeout, as long as the function hasn't returned. Calling this method on the timeout builder will unreference the timer, allowing the process to exit even if a timeout might still be happening.
timeout.execute(fn[, signal])
Executes the given function as configured in the policy. An AbortSignal will be passed to the function, which it should use to abort operations as needed. If cancellation is requested on the parent abort signal provided as the second argument to execute(), the cancellation will be propagated.
```js
await timeout.execute(({ signal }) => getInfoFromDatabase(signal));
```
timeout.onTimeout(callback)
An event emitter that fires when a timeout is reached. Useful for telemetry. Returns a disposable instance.
In the "aggressive" timeout strategy, a timeout event will immediately precede a failure event and promise rejection. In the cooperative timeout strategy, the timeout event is still emitted, but success or failure is determined by what the executed function throws or returns.
```js
const listener = timeout.onTimeout(() => console.log('timeout was reached'));

listener.dispose();
```
timeout.onSuccess(callback)
An event emitter that fires whenever a function is successfully called. It's invoked with an object containing the duration in milliseconds to nanosecond precision.
```js
const listener = timeout.onSuccess(({ duration }) => {
  console.log(`timeout call ran in ${duration}ms`);
});

// later:
listener.dispose();
```
timeout.onFailure(callback)
An event emitter that fires whenever a function throws an error or returns an errorful result. It's invoked with the duration of the call, the reason for the failure, and a boolean indicating whether the error is handled by the policy.
This is only called when the function itself fails, and not when a timeout happens.
```js
const listener = timeout.onFailure(({ duration, handled, reason }) => {
  console.log(`timeout call ran in ${duration}ms and failed with`, reason);
  console.log(handled ? 'error was handled' : 'error was not handled');
});

// later:
listener.dispose();
```
bulkhead(limit[, queue])
A Bulkhead is a simple structure that limits the number of concurrent calls. Attempting to exceed the capacity will cause execute() to throw a BulkheadRejectedError.
```js
import { bulkhead, BulkheadRejectedError } from 'cockatiel';

const bulkheadPolicy = bulkhead(12); // limit to 12 concurrent calls

export async function handleRequest() {
  try {
    return await bulkheadPolicy.execute(() => getInfoFromDatabase());
  } catch (e) {
    if (e instanceof BulkheadRejectedError) {
      return 'too much load, try again later';
    } else {
      throw e;
    }
  }
}
```
You can optionally pass a second parameter to bulkhead(), which will allow calls to be queued instead of rejected after capacity is exceeded. Once again, if this queue fills up, a BulkheadRejectedError will be thrown.
```js
const bulkheadPolicy = bulkhead(12, 4); // limit to 12 concurrent calls, with 4 queued up
```
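The core of the pattern can be sketched in a few lines: a counter of in-flight calls with an eager rejection once the limit is hit. This is a conceptual sketch only; cockatiel's bulkhead additionally supports queuing, abort signals, and its own BulkheadRejectedError type.

```javascript
// Minimal bulkhead: at most `limit` concurrent executions, reject the rest.
function makeBulkhead(limit) {
  let active = 0;
  return async fn => {
    if (active >= limit) {
      throw new Error('bulkhead rejected'); // cockatiel throws BulkheadRejectedError
    }
    active++;
    try {
      return await fn();
    } finally {
      active--; // free the slot whether fn resolved or rejected
    }
  };
}
```

A queued variant would push the call into a bounded array instead of throwing, draining it as slots free up.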
bulkhead.execute(fn[, signal])
Depending on the bulkhead state, either:
- Executes the function immediately and returns its results;
- Queues the function for execution and returns its results when it runs, or;
- Throws a BulkheadRejectedError if the configured concurrency and queue limits have been exceeded.
The abort signal is checked (possibly resulting in a TaskCancelledError) when the function is first submitted to the bulkhead, and when it dequeues.
Like all Policy.execute methods, any propagated { signal: AbortSignal } will be given as the first argument to fn.
```js
const data = await bulkhead.execute(({ signal }) => getInfoFromDatabase(signal));
```
bulkhead.onReject(callback)
An event emitter that fires when a call is rejected. Useful for telemetry. Returns a disposable instance.
```js
const listener = bulkhead.onReject(() => console.log('bulkhead call was rejected'));

listener.dispose();
```
bulkhead.onSuccess(callback)
An event emitter that fires whenever a function is successfully called. It's invoked with an object containing the duration in milliseconds to nanosecond precision.
```js
const listener = bulkhead.onSuccess(({ duration }) => {
  console.log(`bulkhead call ran in ${duration}ms`);
});

// later:
listener.dispose();
```
bulkhead.onFailure(callback)
An event emitter that fires whenever a function throws an error or returns an errorful result. It's invoked with the duration of the call, the reason for the failure, and a boolean indicating whether the error is handled by the policy.
This is only called when the function itself fails, and not when a bulkhead rejection occurs.
```js
const listener = bulkhead.onFailure(({ duration, handled, reason }) => {
  console.log(`bulkhead call ran in ${duration}ms and failed with`, reason);
  console.log(handled ? 'error was handled' : 'error was not handled');
});

// later:
listener.dispose();
```
bulkhead.executionSlots
Returns the number of execution slots left in the bulkhead. If either this or bulkhead.queueSlots is greater than zero, execute() will not throw a BulkheadRejectedError.
bulkhead.queueSlots
Returns the number of queue slots left in the bulkhead. If either this or bulkhead.executionSlots is greater than zero, execute() will not throw a BulkheadRejectedError.
fallback(policy, valueOrFactory)
Creates a policy that returns the valueOrFactory if an executed function fails. As the name suggests, valueOrFactory can either be a value, or a function we'll call when a failure happens to create a value.
```js
import { handleType, fallback as fallbackFor } from 'cockatiel';

// Alias the import so the instance can keep the name used in the examples below
const fallback = fallbackFor(handleType(DatabaseError), () => getStaleData());

export function handleRequest() {
  return fallback.execute(() => getInfoFromDatabase());
}
```
fallback.execute(fn[, signal])
Executes the given function. Any handled error or errorful value will be eaten, and instead the fallback value will be returned.
Like all Policy.execute methods, any propagated { signal: AbortSignal } will be given as the first argument to fn.
```js
const result = await fallback.execute(() => getInfoFromDatabase());
```
fallback.onSuccess(callback)
An event emitter that fires whenever a function is successfully called. It's invoked with an object containing the duration in milliseconds to nanosecond precision.
```js
const listener = fallback.onSuccess(({ duration }) => {
  console.log(`fallback call ran in ${duration}ms`);
});

// later:
listener.dispose();
```
fallback.onFailure(callback)
An event emitter that fires whenever a function throws an error or returns an errorful result. It's invoked with the duration of the call, the reason for the failure, and a boolean indicating whether the error is handled by the policy.
If the error was handled, the fallback will kick in.
```js
const listener = fallback.onFailure(({ duration, handled, reason }) => {
  console.log(`fallback call ran in ${duration}ms and failed with`, reason);
  console.log(handled ? 'error was handled' : 'error was not handled');
});

// later:
listener.dispose();
```
See Also
- App-vNext/Polly: the original, .NET implementation of Polly
- polly-js: a similar package with a subset of .NET Polly/Cockatiel functionality
No vulnerabilities found.
Reason: no dangerous workflow patterns detected
Reason: no binaries found in the repo
Reason: license file detected
Details:
- Info: project has a license file: LICENSE:0
- Info: FSF or OSI recognized license: MIT License: LICENSE:0
Reason: 5 existing vulnerabilities detected
Details:
- Warn: Project is vulnerable to: GHSA-3xgq-45jj-v275
- Warn: Project is vulnerable to: GHSA-4q6p-r6v2-jvc5
- Warn: Project is vulnerable to: GHSA-mwcw-c2x4-8c55
- Warn: Project is vulnerable to: GHSA-9wv6-86v2-598j
- Warn: Project is vulnerable to: GHSA-c2qf-rxjj-qqgw
Reason: dependency not pinned by hash detected -- score normalized to 3
Details:
- Warn: GitHub-owned GitHubAction not pinned by hash: .github/workflows/ci.yml:11: update your workflow using https://app.stepsecurity.io/secureworkflow/connor4312/cockatiel/ci.yml/master?enable=pin
- Warn: GitHub-owned GitHubAction not pinned by hash: .github/workflows/ci.yml:12: update your workflow using https://app.stepsecurity.io/secureworkflow/connor4312/cockatiel/ci.yml/master?enable=pin
- Info: 0 out of 2 GitHub-owned GitHubAction dependencies pinned
- Info: 1 out of 1 npmCommand dependencies pinned
Reason: 2 commit(s) and 1 issue activity found in the last 90 days -- score normalized to 2
Reason: Found 6/26 approved changesets -- score normalized to 2
Reason: detected GitHub workflow tokens with excessive permissions
Details:
- Warn: no topLevel permission defined: .github/workflows/ci.yml:1
- Info: no jobLevel write permissions found
Reason: no effort to earn an OpenSSF best practices badge detected
Reason: security policy file not detected
Details:
- Warn: no security policy file detected
- Warn: no security file to analyze
Reason: project is not fuzzed
Details:
- Warn: no fuzzer integrations found
Reason: branch protection not enabled on development/release branches
Details:
- Warn: branch protection not enabled for branch 'master'
Reason: SAST tool is not run on all commits -- score normalized to 0
Details:
- Warn: 0 commits out of 11 are checked with a SAST tool
Score: 3.5/10
Last Scanned on 2024-12-23
The Open Source Security Foundation is a cross-industry collaboration to improve the security of open source software (OSS). The Scorecard provides security health metrics for open source projects.