Gathering detailed insights and metrics for lru-cache-for-clusters-as-promised
npm install lru-cache-for-clusters-as-promised
Languages: JavaScript (99.2%), Shell (0.8%)
- License: MIT
- Stars: 14
- Commits: 169
- Forks: 4
- Watchers: 3
- Branches: 9
- Contributors: 3
- Updated on: Aug 21, 2023
- Latest Version: 1.7.4
- Package Id: lru-cache-for-clusters-as-promised@1.7.4
- Unpacked Size: 80.37 kB
- Size: 19.69 kB
- File Count: 27
- NPM Version: 7.21.0
- Node Version: 16.8.0
LRU Cache for Clusters as Promised provides a cluster-safe lru-cache via Promises. For environments not using cluster, the class will provide a Promisified interface to a standard lru-cache.
Each time you call cluster.fork(), a new worker process is spawned to run your application. When using a load balancer, even if a user is assigned a particular IP and port, these values are shared between the workers in your cluster, which means there is no guarantee that the user will hit the same worker between requests. Caching the same objects in multiple processes is not an efficient use of memory.

LRU Cache for Clusters as Promised stores a single lru-cache on the master process, which the workers access via IPC messages. The same lru-cache is shared between all workers that have a common master, so no memory is wasted.
When creating a new instance and cluster.isMaster === true, the shared cache is checked based on the namespace; if the shared cache is already populated it will be used, but acted on locally rather than via IPC messages. If the shared cache is not populated, a new LRUCache instance is returned.
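The master/worker split above can be sketched with plain JavaScript. This is a simplified, in-process illustration of the pattern (not the module's actual protocol): the master owns the one cache and workers send it commands, awaiting a reply. All names here are illustrative; the real module carries these messages over process.send()/worker.send() IPC.

```javascript
// stands in for the single lru-cache held on the master process
const masterCache = new Map();

// master-side dispatch: one process owns the data, workers never keep copies
function masterHandle(msg) {
  switch (msg.cmd) {
    case 'set':
      masterCache.set(msg.key, msg.value);
      return { id: msg.id };
    case 'get':
      return { id: msg.id, value: masterCache.get(msg.key) };
    default:
      return { id: msg.id, error: 'unknown command' };
  }
}

// worker-side: wrap the request/response round trip in a Promise,
// which is what gives the module its Promisified interface
function sendToMaster(cmd, key, value) {
  return Promise.resolve(masterHandle({ id: Date.now(), cmd, key, value }))
    .then((reply) => reply.value);
}
```

Because every read and write funnels through the master's single copy, each worker sees the same data without duplicating it in its own heap.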
npm install --save lru-cache-for-clusters-as-promised

yarn add lru-cache-for-clusters-as-promised
Options:

- `namespace: string`, default `"default"`: the namespace for this cache.
- `timeout: integer`, default `100`: milliseconds to wait for a response from the master before settling the Promise per `failsafe`.
- `failsafe: string`, default `resolve`: on timeout the Promise will return `resolve(undefined)` by default, or with a value of `reject` the return will be `reject(Error)`.
- `max: number`: the maximum number of items stored in the cache.
- `maxAge: milliseconds`: the maximum age of an item before it expires.
- `stale: true|false`: when `true`, expired items are returned before they are removed rather than `undefined`.
- `prune: false|crontime string`, defaults to `false`: schedules `prune()` on your cache at regular intervals specified in "crontime"; for example `"*/30 * * * * *"` would prune the cache every 30 seconds (see node-cron patterns for more info). Also works in single-threaded environments not using the cluster module. Passing `false` to an existing namespace will disable any jobs that are scheduled.
- `parse: function`, defaults to `JSON.parse`: used to deserialize object values; set per `LRUCacheForClustersAsPromised` instance and in theory could be different per worker.
- `stringify: function`, defaults to `JSON.stringify`: used to serialize object values.

! note that `length` and `dispose` are missing as it is not possible to pass `functions` via IPC messages.
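The interplay of `timeout` and `failsafe` can be sketched stdlib-only (simplified; not the module's actual implementation): if the master's reply does not arrive within `timeout` ms, the Promise settles with `resolve(undefined)` when `failsafe` is `'resolve'` (the default), or `reject(Error)` when it is `'reject'`.

```javascript
// replyPromise stands in for the pending IPC reply from the master
function withFailsafe(replyPromise, timeout = 100, failsafe = 'resolve') {
  return new Promise((resolve, reject) => {
    // the failsafe fires if the master never answers in time
    const timer = setTimeout(() => {
      if (failsafe === 'reject') {
        reject(new Error('timed out waiting for the master'));
      } else {
        resolve(undefined);
      }
    }, timeout);
    replyPromise.then((value) => {
      clearTimeout(timer);
      resolve(value);
    }, (err) => {
      clearTimeout(timer);
      reject(err);
    });
  });
}
```

With the default `failsafe: 'resolve'`, a timed-out `get()` is indistinguishable from a cache miss, which is usually the safer behavior for a cache.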
Methods:

- `init(): void`: call when `cluster.isMaster === true` to initialize the caches.
- `getInstance(options): Promise<LRUCacheForClustersAsPromised>`: returns an `LRUCacheForClustersAsPromised` instance once the underlying `LRUCache` is guaranteed to exist. Uses the same `options` you would pass to the constructor. When constructed synchronously other methods will ensure the underlying cache is created, but this method can be useful from the worker when you plan to interact with the caches directly. Note that this will slow down the construction time on the worker by a few milliseconds while the cache creation is confirmed.
- `getAllCaches(): { key : LRUCache }`: returns the underlying `LRUCache` caches keyed by namespace. Accessible only when `cluster.isMaster === true`, otherwise throws an exception.
- `getCache(): LRUCache`: returns the underlying `LRUCache`. Accessible only when `cluster.isMaster === true`, otherwise throws an exception.
- `set(key, value, maxAge): Promise<void>`: sets a value for a key. Specifying `maxAge` will cause the value to expire per the `stale` value or when `prune`d.
- `setObject(key, object, maxAge): Promise<void>`: sets an object value for a key after serializing it with `cache.stringify()`, which defaults to `JSON.stringify()`. Use a custom stringifier like `flatted` to handle cases like circular object references.
- `mSet({ key1: 1, key2: 2, ... }, maxAge): Promise<void>`: sets multiple key/value pairs at once.
- `mSetObjects({ key1: { obj: 1 }, key2: { obj: 2 }, ... }, maxAge): Promise<void>`: sets multiple object values at once, passing the values through `cache.stringify()`; see `cache.setObject()`.
- `get(key): Promise<string | number | null | undefined>`: returns the value for a key.
- `getObject(key): Promise<Object | null | undefined>`: returns an object value for a key after deserializing it with `cache.parse()`, which defaults to `JSON.parse()`. Use a custom parser like `flatted` to handle cases like circular object references.
- `mGet([key1, key2, ...]): Promise<{key: string | number | null | undefined}?>`: returns multiple values as an object like `{ key1: '1', key2: '2' }`.
- `mGetObjects([key1, key2, ...]): Promise<{key: Object | null | undefined}?>`: returns multiple object values like `{ key1: '1', key2: '2' }`, passing the values through `cache.parse()`; see `cache.getObject()`.
- `peek(key): Promise<string | number | null | undefined>`: returns the value for a key without updating its recency.
- `del(key): Promise<void>`: removes a value from the cache.
- `mDel([key1, key2, ...]): Promise<void>`: removes multiple values from the cache.
- `has(key): Promise<boolean>`: returns whether a key is present in the cache.
- `incr(key, [amount]): Promise<number>`: increments a numeric value by `amount`, which defaults to `1`. More atomic in a clustered environment.
- `decr(key, [amount]): Promise<number>`: decrements a numeric value by `amount`, which defaults to `1`. More atomic in a clustered environment.
- `reset(): Promise<void>`: removes all values from the cache.
- `keys(): Promise<Array<string>>`: returns the cache keys.
- `values(): Promise<Array<string | number>>`: returns the cache values.
- `dump()`: returns a serialized dump of the cache.
- `prune(): Promise<void>`: removes expired values from the cache.
- `length(): Promise<number>`: returns the number of items in the cache.
- `itemCount(): Promise<number>`: returns the number of items in the cache, same as `length()`.
- `max([max]): Promise<number | void>`: gets or sets the `max` value for the cache.
- `maxAge([maxAge]): Promise<number | void>`: gets or sets the `maxAge` value for the cache.
- `allowStale([true|false]): Promise<boolean | void>`: gets or sets the `allowStale` value for the cache (set via `stale` in options). The `stale()` method is deprecated.
- `execute(command, [arg1, arg2, ...]): Promise<any>`: executes an arbitrary command (`LRUCache` function) on the cache and returns whatever value was returned.

Master
```javascript
// require the module in your master thread that creates workers to initialize
require('lru-cache-for-clusters-as-promised').init();
```
Worker
```javascript
// worker code
const LRUCache = require('lru-cache-for-clusters-as-promised');

// this is safe on the master and workers. if you need to ensure the underlying
// LRUCache exists use `await getInstance()` to fetch the promisified cache.
let cache = new LRUCache({
  namespace: 'users',
  max: 50,
  stale: false,
  timeout: 100,
  failsafe: 'resolve',
});

const user = { name: 'user name' };
const key = 'userKey';

// using async/await
(async function () {
  // get cache instance asynchronously. this will always be the same underlying cache
  cache = await LRUCache.getInstance({ /* ...options */ });

  // set a user for the key
  await cache.set(key, user);
  console.log('set the user to the cache');

  // get the same user back out of the cache
  const cachedUser = await cache.get(key);
  console.log('got the user from cache', cachedUser);

  // check the number of users in the cache
  const size = await cache.length();
  console.log('user cache size/length', size);

  // remove all the items from the cache
  await cache.reset();
  console.log('the user cache is empty');

  // return user count, this will return the same value as calling length()
  const itemCount = await cache.itemCount();
  console.log('user cache size/itemCount', itemCount);
}());

// using thenables
LRUCache.getInstance({ /* ...options */ })
  .then((myCache) =>
    myCache.set(key, user)
      .then(() => myCache.get(key))
  );
```
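Why the docs call `incr()` and `decr()` "more atomic" in a clustered environment can be shown with a stdlib-only sketch (illustrative; the real module proxies this over IPC): the read-modify-write happens in one place, the master, so two workers incrementing concurrently cannot overwrite each other the way separate `get()` + `set()` round trips from each worker could.

```javascript
// stands in for the master's single cache
const counters = new Map();

// the whole read-modify-write runs on the master in one step,
// so concurrent worker requests are serialized rather than interleaved
function incr(key, amount = 1) {
  const next = (Number(counters.get(key)) || 0) + amount;
  counters.set(key, next);
  return next;
}

function decr(key, amount = 1) {
  return incr(key, -amount);
}
```

By contrast, a worker that does `get()`, adds locally, then `set()`s back can lose updates: two workers may both read the same old value before either writes.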
Use a custom object parser for the cache to handle cases like circular object references that JSON.parse() and JSON.stringify() cannot, or use custom revivers, etc.
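To see why the default serializer needs replacing here, note that plain `JSON.stringify()` throws a `TypeError` on circular structures:

```javascript
// build the same circular structure used in the example below:
// a.b === b and b.a === a
const a = { b: null };
const b = { a };
b.a.b = b;

let threw = false;
try {
  JSON.stringify(a); // default stringifier cannot handle the cycle
} catch (err) {
  // Node reports "Converting circular structure to JSON"
  threw = err instanceof TypeError;
}
```

A cycle-aware stringifier such as `flatted` encodes the references instead of recursing forever, which is why it is plugged in via the `parse`/`stringify` options.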
```javascript
const flatted = require('flatted');
const LRUCache = require('lru-cache-for-clusters-as-promised');

const cache = new LRUCache({
  namespace: 'circular-objects',
  max: 50,
  parse: flatted.parse,
  stringify: flatted.stringify,
});

// create a circular reference
const a = { b: null };
const b = { a };
b.a.b = b;

(async function () {
  // this will work
  await cache.setObject(1, a);

  // this will return a copy of the object with the same circular structure via flatted
  const c = await cache.getObject(1);
  if (c.b.a === c) {
    console.log('the circular reference survived the round trip!');
  }
}());
```
- Clustered cache on master thread for clustered environments
- Promisified for non-clustered environments
No vulnerabilities found.
OpenSSF Scorecard findings:

- no binaries found in the repo
- no dangerous workflow patterns detected
- license file detected
- dependency not pinned by hash detected -- score normalized to 5
- 0 commit(s) and 0 issue activity found in the last 90 days -- score normalized to 0
- Found 0/30 approved changesets -- score normalized to 0
- no SAST tool detected
- detected GitHub workflow tokens with excessive permissions
- no effort to earn an OpenSSF best practices badge detected
- security policy file not detected
- project is not fuzzed
- branch protection not enabled on development/release branches
- 31 existing vulnerabilities detected

Last Scanned on 2025-07-07
The Open Source Security Foundation is a cross-industry collaboration to improve the security of open source software (OSS). The Scorecard provides security health metrics for open source projects.