An efficient Promise lock for Node.js projects, ensuring mutually exclusive execution of asynchronous tasks. Key features include a backpressure indicator and the ability to gracefully await the completion of all currently executing or pending tasks, making it ideal for robust production applications requiring smooth teardown.
```bash
npm install zero-overhead-promise-lock
```
Recent updates:

- Improve documentation and update dev dependency (Apr 13, 2025)
- Enable smart reuse via `currentExecution` getter to expose the active task's promise (Apr 10, 2025)
- README improvements (Feb 24, 2025)
- Add resilience test, improve README, implementation refinement (Feb 16, 2025)
- Bugfix: A rejecting task caused all other pending tasks to reject without execution (Feb 10, 2025)
- README and Documentation improvements (Feb 09, 2025)
Languages: TypeScript (99.57%), JavaScript (0.43%)

Total downloads: 2,406 (last day: 16; last week: 194; last month: 1,025; last year: 2,406)
License: Apache-2.0
Repository: 8 commits, 1 watcher, 1 branch, 1 contributor (updated on Apr 13, 2025)
Latest version: 1.2.1 (zero-overhead-promise-lock@1.2.1), published on Apr 13, 2025 with npm 10.9.2 on Node 20.13.1
Package size: 25.01 kB (110.00 kB unpacked, 11 files)
The `ZeroOverheadLock` class implements a modern Promise-lock for Node.js projects, enabling users to ensure the mutually exclusive execution of specified asynchronous tasks. Key features include:

- The `waitForAllExistingTasksToComplete` method gracefully awaits the completion of all currently executing or pending tasks. Example use cases include application shutdowns (e.g., `onModuleDestroy` in Nest.js applications) or maintaining a clear state between unit tests.
- The `isAvailable` getter is designed for "check-and-abort" scenarios, enabling operations to be skipped or aborted if the lock is currently held by another task.
- The `currentExecution` getter exposes the currently executing task's promise, if one is active, enabling smart reuse.
- The `pendingTasksCount` getter provides a real-time metric indicating the current backpressure from tasks waiting for the lock to become available. Users can leverage this data to make informed decisions, such as throttling, load balancing, or managing system load. Additionally, this metric can aid in internal resource management within a containerized environment. If multiple locks exist - each tied to a unique key - a backpressure value of 0 may indicate that a lock is no longer needed and can be removed temporarily to optimize resource usage.
- The `tsconfig` target is set to ES2020.

If your use case involves keyed tasks - where you need to ensure the mutually exclusive execution of tasks associated with the same key - consider using the keyed variant of this package: zero-overhead-keyed-promise-lock. Effectively, a keyed lock functions as a temporary FIFO task queue per key.

Unlike single-threaded C code, the event-loop architecture used in modern JavaScript runtime environments introduces the possibility of race conditions, especially for asynchronous tasks that span multiple event-loop iterations.
In Node.js, synchronous code blocks - those that do not contain an `await` keyword - are guaranteed to execute within a single event-loop iteration. These blocks inherently do not require synchronization, as their execution is mutually exclusive by definition and cannot overlap.
In contrast, asynchronous tasks that include at least one `await` necessarily span multiple event-loop iterations. Such tasks may require synchronization to prevent overlapping executions that could lead to race conditions, resulting in inconsistent or invalid states. Such races occur when event-loop iterations from task A interleave with those from task B, each unaware of the other and potentially acting on an intermediate state.
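This interleaving is easy to reproduce. The sketch below (an illustrative toy; `incrementUnsafe` and `demo` are hypothetical helpers, not part of this package) splits a read-modify-write across an `await`, so two concurrent calls both read the stale value and one update is lost:

```typescript
// A read-modify-write split across an await: the classic lost-update race.
let counter = 0;

async function incrementUnsafe(): Promise<void> {
  const current = counter;  // read
  await Promise.resolve();  // yield to the event loop; the other task runs here
  counter = current + 1;    // write, based on a now-stale read
}

async function demo(): Promise<number> {
  counter = 0;
  await Promise.all([incrementUnsafe(), incrementUnsafe()]);
  return counter; // 1, not 2: both calls read 0 and both wrote 1
}
```

Wrapping the read-modify-write in a mutually exclusive task (e.g., via `executeExclusive`) would serialize the two calls and restore the expected result of 2.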
Additionally, locks are sometimes employed purely for performance optimization, such as throttling, rather than for preventing race conditions. In such cases, the lock effectively functions as a semaphore with a concurrency of 1. For example, limiting concurrent access to a shared resource may be necessary to reduce contention or meet operational constraints.
If your use case requires a concurrency greater than 1, consider using the semaphore variant of this package: zero-backpressure-semaphore-typescript. While semaphores can emulate locks by setting their concurrency to 1, locks provide a more efficient implementation with reduced overhead.
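To make the lock/semaphore relationship concrete, here is a minimal counting-semaphore sketch (a hypothetical illustration, not the implementation of zero-backpressure-semaphore-typescript); constructing it with a concurrency of 1 yields exactly a lock:

```typescript
class MiniSemaphore {
  private _active = 0;
  private _waiters: Array<() => void> = [];

  constructor(private readonly _maxConcurrency: number) {}

  public async execute<T>(task: () => Promise<T>): Promise<T> {
    // Wait until a slot is free; re-check after every wakeup so the
    // concurrency cap is never exceeded.
    while (this._active >= this._maxConcurrency) {
      await new Promise<void>((resolve) => this._waiters.push(resolve));
    }
    this._active++;
    try {
      return await task();
    } finally {
      this._active--;
      this._waiters.shift()?.(); // wake one waiter, if any
    }
  }
}
```

The `try`/`finally` guarantees the slot is released even when the task throws, which is the same safety property the lock's `executeExclusive` provides.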
Traditional lock APIs require explicit acquire and release steps, adding overhead and responsibility for the user. Additionally, they introduce the risk of deadlocking the application if one forgets to release, for example, due to a thrown exception.
In contrast, `ZeroOverheadLock` manages task execution, abstracting away these details and reducing user responsibility. The acquire and release steps are handled implicitly by the `executeExclusive` method, reminiscent of the RAII idiom in C++.
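The underlying idea can be sketched in a few lines (an illustrative toy, not the actual `ZeroOverheadLock` implementation): each task is chained onto the promise of the previous one, so the "release" happens automatically when the task settles, even if it throws:

```typescript
class MiniLock {
  // Tail of the chain: settles when the most recently queued task settles.
  private _tail: Promise<unknown> = Promise.resolve();

  public executeExclusive<T>(task: () => Promise<T>): Promise<T> {
    const result = this._tail.then(() => task());
    // Swallow rejections on the internal chain so one failing task
    // does not reject the tasks queued after it.
    this._tail = result.catch(() => {});
    return result;
  }
}
```

Note the `catch` on the internal chain: without it, a single rejecting task would reject every task queued after it - precisely the failure mode fixed in this package's Feb 10, 2025 release.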
The `ZeroOverheadLock` class provides the following methods:

- `executeExclusive`: Executes the given task once the lock becomes available, with the acquire and release steps handled implicitly.
- `waitForAllExistingTasksToComplete`: Gracefully awaits the completion of all currently executing or pending tasks. Example use cases include application shutdowns (e.g., `onModuleDestroy` in Nest.js applications) or maintaining a clear state between unit tests.

If needed, refer to the code documentation for a more comprehensive description of each method.
The `ZeroOverheadLock` class provides the following getters to reflect the current lock's state:

- `isAvailable`: Indicates whether the lock is currently available (not held by any task).
- `currentExecution`: The currently executing task's promise, if one is active.
- `pendingTasksCount`: The number of tasks currently waiting for the lock to become available.
In an Intrusion Detection System (IDS), it is common to aggregate non-critical alerts (e.g., low-severity anomalies) in memory and flush them to a database in bulk. This approach minimizes the load caused by frequent writes for non-essential data. The bulk writes occur either periodically or whenever the accumulated data reaches a defined threshold.
Below, we explore implementation options for managing these bulk writes while addressing potential race conditions that could lead to data consistency issues.
The following implementation demonstrates the aggregation logic. For simplicity, error handling is omitted to focus on identifying and fixing the race condition:
```ts
import { IAlertMetadata } from './interfaces';

export class IntrusionDetectionSystem {
  private _accumulatedAlerts: Readonly<IAlertMetadata>[] = [];

  constructor(private readonly _maxAccumulatedAlerts: number) {}

  // Naive implementation:
  public async addAlert(alert: Readonly<IAlertMetadata>): Promise<void> {
    this._accumulatedAlerts.push(alert);

    if (this._accumulatedAlerts.length >= this._maxAccumulatedAlerts) {
      await this._flushToDb(this._accumulatedAlerts);
      this._accumulatedAlerts = [];
    }
  }

  private async _flushToDb(alerts: IAlertMetadata[]): Promise<void> {
    // Perform a bulk write to an external resource.
  }
}
```
Resetting `_accumulatedAlerts` only after the bulk-write completes introduces the risk of accumulating additional alerts during the write operation. This can result in duplicate processing or excessive database writes.
To resolve the race condition, the `addAlert` logic can be treated as a critical section, protected by a lock:
```ts
import { ZeroOverheadLock } from 'zero-overhead-promise-lock';
import { IAlertMetadata } from './interfaces';

export class IntrusionDetectionSystem {
  private readonly _accumulationLock = new ZeroOverheadLock<void>();
  private _accumulatedAlerts: Readonly<IAlertMetadata>[] = [];

  constructor(private readonly _maxAccumulatedAlerts: number) {}

  public async addAlert(alert: Readonly<IAlertMetadata>): Promise<void> {
    await this._accumulationLock.executeExclusive(async () => {
      this._accumulatedAlerts.push(alert);

      if (this._accumulatedAlerts.length >= this._maxAccumulatedAlerts) {
        await this._flushToDb(this._accumulatedAlerts);
        this._accumulatedAlerts = [];
      }
    });
  }

  /**
   * Gracefully awaits the completion of all ongoing tasks before shutdown.
   * This method is well-suited for use in `onModuleDestroy` in Nest.js
   * applications or similar lifecycle scenarios.
   */
  public async onTeardown(): Promise<void> {
    while (!this._accumulationLock.isAvailable) {
      await this._accumulationLock.waitForAllExistingTasksToComplete();
    }
  }

  private async _flushToDb(alerts: IAlertMetadata[]): Promise<void> {
    // Perform a bulk write to an external resource.
  }
}
```
While this ensures correctness, it introduces potential backpressure. The lock prevents concurrent accumulation during a bulk write, possibly blocking alert processing during high throughput periods.
A more efficient solution involves separating the logic for resetting the accumulation array from the bulk write operation itself. This guarantees that only one bulk write is active while allowing uninterrupted accumulation:
```ts
import { ZeroOverheadLock } from 'zero-overhead-promise-lock';
import { IAlertMetadata } from './interfaces';

export class IntrusionDetectionSystem {
  private readonly _bulkWriteLock = new ZeroOverheadLock<void>();
  private _accumulatedAlerts: Readonly<IAlertMetadata>[] = [];

  constructor(private readonly _maxAccumulatedAlerts: number) {}

  public async addAlert(alert: Readonly<IAlertMetadata>): Promise<void> {
    this._accumulatedAlerts.push(alert);

    if (this._accumulatedAlerts.length < this._maxAccumulatedAlerts) {
      return;
    }

    const currentBatch = this._accumulatedAlerts;
    this._accumulatedAlerts = [];
    await this._bulkWriteLock.executeExclusive(
      () => this._flushToDb(currentBatch)
    );
  }

  /**
   * Gracefully awaits the completion of all ongoing tasks before shutdown.
   * This method is well-suited for use in `onModuleDestroy` in Nest.js
   * applications or similar lifecycle scenarios.
   */
  public async onTeardown(): Promise<void> {
    while (!this._bulkWriteLock.isAvailable) {
      await this._bulkWriteLock.waitForAllExistingTasksToComplete();
    }
  }

  private async _flushToDb(alerts: IAlertMetadata[]): Promise<void> {
    // Perform a bulk write to an external resource.
  }
}
```
Consider a non-overlapping variant of `setInterval`, designed for asynchronous tasks:
A scheduler component that manages a single recurring task while ensuring executions do not overlap. The scheduler maintains a fixed interval between start times, and if a previous execution is still in progress when a new cycle begins, the new execution is skipped.
Additionally, the component supports graceful teardown, meaning it not only stops future executions but also awaits the completion of any ongoing execution before shutting down.
The `isAvailable` lock indicator can be used to determine whether an execution should be skipped:
```ts
import { ZeroOverheadLock } from 'zero-overhead-promise-lock';

export class NonOverlappingRecurringTask {
  private readonly _lock = new ZeroOverheadLock<void>();
  private _timerHandle?: ReturnType<typeof setInterval>;

  constructor(
    private readonly _task: () => Promise<void>,
    private readonly _intervalMs: number
  ) {}

  public start(): void {
    if (this._timerHandle !== undefined) {
      throw new Error('Instance is already running');
    }

    this._timerHandle = setInterval(
      (): void => {
        if (this._lock.isAvailable) {
          // For simplicity, we assume the task does not throw.
          this._lock.executeExclusive(this._task);
        }
      },
      this._intervalMs
    );
  }

  public async stop(): Promise<void> {
    if (this._timerHandle === undefined) {
      return;
    }

    clearInterval(this._timerHandle);
    this._timerHandle = undefined;
    await this._lock.waitForAllExistingTasksToComplete();
  }
}
```
A common example of using locks is the READ-AND-UPDATE scenario, where concurrent reads of the same value can lead to erroneous updates. While such examples are intuitive, they are often less relevant in modern applications due to advancements in databases and external storage solutions. Modern databases, as well as caches like Redis, provide native support for atomic operations. Always prioritize leveraging atomicity in external resources before resorting to in-memory locks.
Consider the following function that increments the number of product views for the last hour in a MongoDB collection. Using two separate operations, this implementation introduces a race condition:
```ts
async function updateViews(products: Collection<IProductSchema>, productID: string): Promise<void> {
  const product = await products.findOne({ _id: productID }); // Step 1: Read
  if (!product) return;

  const currentViews = product.hourlyViews ?? 0;
  await products.updateOne(
    { _id: productID },
    { $set: { hourlyViews: currentViews + 1 } } // Step 2: Update
  );
}
```
The race condition occurs when two or more processes or concurrent tasks (Promises within the same process) execute this function simultaneously, potentially leading to incorrect counter values. This can be mitigated by using MongoDB's atomic `$inc` operator, as shown below:
```ts
async function updateViews(products: Collection<IProductSchema>, productID: string): Promise<void> {
  await products.updateOne(
    { _id: productID },
    { $inc: { hourlyViews: 1 } } // Atomic increment
  );
}
```
By combining the read and update into a single atomic operation, the code avoids the need for locks and improves both reliability and performance.
No security vulnerabilities found.