cacheable: High Performance Layer 1 / Layer 2 Caching with Keyv Storage

cacheable is a high performance layer 1 / layer 2 caching engine that is focused on distributed caching, with enterprise features such as CacheSync (coming soon). It is built on top of the robust storage engine Keyv and provides a simple API to cache and retrieve data.
cacheable is primarily used as an extension to your caching engine, providing a robust storage backend via Keyv, along with memoization (wrap), hooks, events, and statistics.
```bash
npm install cacheable
```
```typescript
import { Cacheable } from 'cacheable';

const cacheable = new Cacheable();
await cacheable.set('key', 'value', 1000);
const value = await cacheable.get('key');
```
This is a basic example where you are only using the in-memory storage engine. To enable layer 1 and layer 2 caching you can use the secondary property in the options:
```typescript
import { Cacheable } from 'cacheable';
import KeyvRedis from '@keyv/redis';

const secondary = new KeyvRedis('redis://user:pass@localhost:6379');
const cache = new Cacheable({ secondary });
```
In this example the primary store is the default in-memory cache and the secondary store is Redis. You can also set both stores explicitly in the options, for example using lru-cache as the primary:
```typescript
import { Cacheable } from 'cacheable';
import { Keyv } from 'keyv';
import KeyvRedis from '@keyv/redis';
import { LRUCache } from 'lru-cache';

const primary = new Keyv({ store: new LRUCache({ max: 1000 }) }); // lru-cache requires a max (or similar) option
const secondary = new KeyvRedis('redis://user:pass@localhost:6379');
const cache = new Cacheable({ primary, secondary });
```
This is a more advanced example and not needed for most use cases.
The following hooks are available for you to extend the functionality of cacheable via the CacheableHooks enum:

- BEFORE_SET: This is called before the set() method is called.
- AFTER_SET: This is called after the set() method is called.
- BEFORE_SET_MANY: This is called before the setMany() method is called.
- AFTER_SET_MANY: This is called after the setMany() method is called.
- BEFORE_GET: This is called before the get() method is called.
- AFTER_GET: This is called after the get() method is called.
- BEFORE_GET_MANY: This is called before the getMany() method is called.
- AFTER_GET_MANY: This is called after the getMany() method is called.
- BEFORE_SECONDARY_SETS_PRIMARY: This is called when the secondary store sets the value in the primary store.

An example of how to use these hooks:
```typescript
import { Cacheable, CacheableHooks } from 'cacheable';

const cacheable = new Cacheable();
cacheable.onHook(CacheableHooks.BEFORE_SET, (data) => {
  console.log(`before set: ${data.key} ${data.value}`);
});
```
Here is an example of how to use the BEFORE_SECONDARY_SETS_PRIMARY hook:
```typescript
import { Cacheable, CacheableHooks } from 'cacheable';
import KeyvRedis from '@keyv/redis';

const secondary = new KeyvRedis('redis://user:pass@localhost:6379');
const cache = new Cacheable({ secondary });
cache.onHook(CacheableHooks.BEFORE_SECONDARY_SETS_PRIMARY, (data) => {
  console.log(`before secondary sets primary: ${data.key} ${data.value} ${data.ttl}`);
});
```
This is called when the secondary store sets the value in the primary store. This is useful if you want to do something before the value is set in the primary store, such as manipulating the ttl or the value.
cacheable is built as a layer 1 and layer 2 caching engine by default. The purpose is to have your layer 1 be fast and your layer 2 be more persistent. The primary store is the layer 1 cache and the secondary store is the layer 2 cache. By adding the secondary store you are enabling layer 2 caching. By default the operations are blocking but fault tolerant:

- Setting Data: Sets the value in the primary store and then the secondary store.
- Getting Data: Gets the value from the primary store; if the value does not exist it will get it from the secondary store and set it in the primary store.
- Deleting Data: Deletes the value from the primary store and secondary store at the same time, waiting for both to respond.
- Clearing Data: Clears the primary store and secondary store at the same time, waiting for both to respond.

When getting data, if the value does not exist in the primary store it will try to get it from the secondary store. If the secondary store returns the value it will set it in the primary store. Because we use TTL propagation the value will be set in the primary store with the TTL of the secondary store, unless that TTL is greater than the primary store's TTL, in which case the TTL of the primary store is used. An example of this is:
```typescript
import { Cacheable } from 'cacheable';
import KeyvRedis from '@keyv/redis';
import { setTimeout as sleep } from 'node:timers/promises'; // sleep helper

const secondary = new KeyvRedis('redis://user:pass@localhost:6379', { ttl: 1000 });
const cache = new Cacheable({ secondary, ttl: 100 });

await cache.set('key', 'value'); // sets the value in the primary store with a ttl of 100 ms and the secondary store with a ttl of 1000 ms

await sleep(500); // wait for .5 seconds

const value = await cache.get('key'); // gets the value from the secondary store and sets it in the primary store with a ttl of 500 ms, which is what is left from the secondary store
In this example the primary store entry has a ttl of 100 ms and the secondary store entry has a ttl of 1000 ms. Because the remaining ttl in the secondary store is greater, it is used when the value is set back into the primary store. In the next example the primary store has its own adapter-level ttl, which takes precedence:
```typescript
import { Cacheable } from 'cacheable';
import { Keyv } from 'keyv';
import KeyvRedis from '@keyv/redis';
import { setTimeout as sleep } from 'node:timers/promises'; // sleep helper

const primary = new Keyv({ ttl: 200 });
const secondary = new KeyvRedis('redis://user:pass@localhost:6379', { ttl: 1000 });
const cache = new Cacheable({ primary, secondary });

await cache.set('key', 'value'); // sets the value in the primary store with a ttl of 200 ms and the secondary store with a ttl of 1000 ms

await sleep(200); // wait for .2 seconds

const value = await cache.get('key'); // gets the value from the secondary store and sets it in the primary store with a ttl of 200 ms, which is what the primary store is configured with
```
Cacheable TTL propagation is a feature that allows you to set a time to live (TTL) for the cache. By default the TTL is resolved in the following order:

ttl = set at the function ?? storage adapter ttl ?? cacheable ttl

This means that if you set a TTL at the function level it will override the storage adapter TTL and the cacheable TTL. If you do not set a TTL at the function level it will use the storage adapter TTL, and then the cacheable TTL. If you do not set a TTL at all it will use the default TTL of undefined, which is disabled.
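Here is a minimal sketch of that resolution order, assuming a Keyv primary store configured with its own adapter ttl:

```typescript
import { Cacheable } from 'cacheable';
import { Keyv } from 'keyv';

const primary = new Keyv({ ttl: 5000 });             // storage adapter ttl: 5 seconds
const cache = new Cacheable({ primary, ttl: '1h' }); // cacheable default ttl: 1 hour

await cache.set('a', 'value');      // no ttl at the function level, so the adapter ttl (5000 ms) applies
await cache.set('b', 'value', 250); // the function-level ttl (250 ms) overrides the other two
```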
By default in Cacheable and CacheableMemory the ttl is in milliseconds, but you can use shorthand values for the time to live. Here are the supported shorthand values:

- ms: Milliseconds such as (1ms = 1)
- s: Seconds such as (1s = 1000)
- m: Minutes such as (1m = 60000)
- h or hr: Hours such as (1h = 3600000)
- d: Days such as (1d = 86400000)

Here is an example of how to use the shorthand for the ttl:
```typescript
import { Cacheable } from 'cacheable';

const cache = new Cacheable({ ttl: '15m' }); // sets the default ttl to 15 minutes (900000 ms)
cache.set('key', 'value', '1h'); // sets the ttl to 1 hour (3600000 ms) and overrides the default
```
If you want to disable the ttl you can set it to 0 or undefined:
```typescript
import { Cacheable } from 'cacheable';

const cache = new Cacheable({ ttl: 0 }); // sets the default ttl to 0 which is disabled
cache.set('key', 'value', 0); // sets the ttl to 0 which is disabled
```
If you set the ttl to anything below 0 or to undefined, it will disable the ttl for the cache and the ttl property will return undefined. With no ttl set, values are stored indefinitely.
```typescript
import { Cacheable } from 'cacheable';

const cache = new Cacheable({ ttl: 0 }); // sets the default ttl to 0 which is disabled
console.log(cache.ttl); // undefined
cache.ttl = '1h'; // sets the default ttl to 1 hour (3600000 ms)
console.log(cache.ttl); // '1h'
cache.ttl = -1; // anything below 0 disables the ttl
console.log(cache.ttl); // undefined
```
The get and getMany methods support a raw option, which returns the full stored metadata (StoredDataRaw<T>) instead of just the value:
```typescript
import { Cacheable } from 'cacheable';

const cache = new Cacheable();

// store a value
await cache.set('user:1', { name: 'Alice' });

// default: only the value
const user = await cache.get<{ name: string }>('user:1');
console.log(user); // { name: 'Alice' }

// with raw: full record including expiration
const raw = await cache.get<{ name: string }>('user:1', { raw: true });
console.log(raw.value); // { name: 'Alice' }
console.log(raw.expires); // e.g. 1677628495000 or null
```
```typescript
// getMany with raw option
await cache.set('a', 1);
await cache.set('b', 2);

const raws = await cache.getMany<number>(['a', 'b'], { raw: true });
raws.forEach((entry, idx) => {
  console.log(`key=${['a', 'b'][idx]}, value=${entry?.value}, expires=${entry?.expires}`);
});
```
If you want your layer 2 (secondary) store to be non-blocking you can set the nonBlocking property to true in the options. This makes the secondary store non-blocking: the cache will not wait for the secondary store to respond when setting, deleting, or clearing data. This is useful if you want a faster response time and do not want to wait for the secondary store.

```typescript
import { Cacheable } from 'cacheable';
import KeyvRedis from '@keyv/redis';

const secondary = new KeyvRedis('redis://user:pass@localhost:6379');
const cache = new Cacheable({ secondary, nonBlocking: true });
```

The getOrSet method provides a convenient way to implement the cache-aside pattern. It attempts to retrieve a value from cache, and if not found, calls the provided function to compute the value and store it in cache before returning it.

```typescript
import { Cacheable } from 'cacheable';

// Create a new Cacheable instance
const cache = new Cacheable();

// Use getOrSet to fetch user data
async function getUserData(userId: string) {
  return await cache.getOrSet(
    `user:${userId}`,
    async () => {
      // This function only runs if the data isn't in the cache
      console.log('Fetching user from database...');
      // Simulate database fetch
      return { id: userId, name: 'John Doe', email: 'john@example.com' };
    },
    { ttl: '30m' } // Cache for 30 minutes
  );
}

// First call - will fetch from "database"
const user1 = await getUserData('123');
console.log(user1); // { id: '123', name: 'John Doe', email: 'john@example.com' }

// Second call - will retrieve from cache
const user2 = await getUserData('123');
console.log(user2); // Same data, but retrieved from cache
```
cacheable has a feature called CacheSync that is coming soon. This feature will allow distributed caching with Pub/Sub: you can run multiple instances of cacheable, and when a value is set, deleted, or cleared, all instances of cacheable are updated with the same value. This feature should be live by end of year.
The following options are available for you to configure cacheable:

- primary: The primary store for the cache (layer 1). Defaults to in-memory by Keyv.
- secondary: The secondary store for the cache (layer 2), usually a persistent cache by Keyv.
- nonBlocking: Whether the secondary store is non-blocking. Default is false.
- stats: Enables statistics for this instance. Default is false.
- ttl: The default time to live for the cache in milliseconds. Default is undefined, which is disabled.
- namespace: The namespace for the cache. Default is undefined.

If you want to enable statistics for your instance you can set the stats option to true. This will enable statistics for your instance and you can read them via the stats property. Here are the statistics properties:
- hits: The number of hits in the cache.
- misses: The number of misses in the cache.
- sets: The number of sets in the cache.
- deletes: The number of deletes in the cache.
- clears: The number of clears in the cache.
- errors: The number of errors in the cache.
- count: The number of keys in the cache.
- vsize: The estimated byte size of the values in the cache.
- ksize: The estimated byte size of the keys in the cache.

You can clear / reset the stats by calling the .stats.reset() method.
This does not enable statistics for your layer 2 cache as that is a distributed cache.
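A minimal sketch of enabling and reading statistics, using the stats option described above:

```typescript
import { Cacheable } from 'cacheable';

const cache = new Cacheable({ stats: true }); // enable statistics for this instance

await cache.set('key', 'value');
await cache.get('key');     // hit
await cache.get('missing'); // miss

console.log(cache.stats.hits);   // 1
console.log(cache.stats.misses); // 1

cache.stats.reset(); // clear / reset the statistics
```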
Cacheable API:

- set(key, value, ttl?): Sets a value in the cache.
- setMany([{key, value, ttl?}]): Sets multiple values in the cache.
- get(key): Gets a value from the cache.
- get(key, { raw: true }): Gets a raw value from the cache.
- getMany([keys]): Gets multiple values from the cache.
- getMany([keys], { raw: true }): Gets multiple raw values from the cache.
- has(key): Checks if a value exists in the cache.
- hasMany([keys]): Checks if multiple values exist in the cache.
- take(key): Takes a value from the cache and deletes it.
- takeMany([keys]): Takes multiple values from the cache and deletes them.
- delete(key): Deletes a value from the cache.
- deleteMany([keys]): Deletes multiple values from the cache.
- clear(): Clears the cache stores. Be careful with this as it will clear both layer 1 and layer 2.
- wrap(function, WrapOptions): Wraps an async function in a cache.
- getOrSet(GetOrSetKey, valueFunction, GetOrSetFunctionOptions): Gets a value from cache or sets it if not found using the provided function.
- disconnect(): Disconnects from the cache stores.
- onHook(hook, callback): Sets a hook.
- removeHook(hook): Removes a hook.
- on(event, callback): Listens for an event.
- removeListener(event, callback): Removes a listener.
- hash(object: any, algorithm = 'sha256'): string: Hashes an object with the given algorithm. Default is sha256.

Cacheable properties:

- primary: The primary store for the cache (layer 1). Defaults to in-memory by Keyv.
- secondary: The secondary store for the cache (layer 2), usually a persistent cache by Keyv.
- namespace: The namespace for the cache. Default is undefined. This will set the namespace for the primary and secondary stores.
- nonBlocking: Whether the secondary store is non-blocking. Default is false.
- stats: The statistics for this instance, which include hits, misses, sets, deletes, clears, errors, count, vsize, and ksize.
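As a brief sketch of a few of these methods (note that take deletes the value on read):

```typescript
import { Cacheable } from 'cacheable';

const cache = new Cacheable();
await cache.set('key', 'value');

console.log(await cache.has('key'));  // true
console.log(await cache.take('key')); // 'value' - the value is removed from the cache
console.log(await cache.has('key'));  // false
```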
CacheableMemory

cacheable comes with a built-in in-memory cache called CacheableMemory. This is a simple in-memory cache that is used as the primary store for cacheable. You can use it as a standalone cache or as a primary store for cacheable. Here is an example of how to use CacheableMemory:
```typescript
import { CacheableMemory } from 'cacheable';

const options = {
  ttl: '1h', // 1 hour
  useClones: true, // use clones for the values (default is true)
  lruSize: 1000, // the size of the LRU cache (default is 0 which is unlimited)
};
const cache = new CacheableMemory(options);
cache.set('key', 'value');
const value = cache.get('key'); // value
```
You can use CacheableMemory as a standalone cache or as a primary store for cacheable. You can also set the useClones property to false if you want to use the same reference for the values. This is useful if you are using large objects and want to save memory. The lruSize property is the size of the LRU cache and is set to 0 by default, which is unlimited. Setting the lruSize property will limit the number of keys in the cache.

This simple in-memory cache uses multiple Map objects, along with expiration and LRU policies if set, to manage the in-memory cache at scale.
By default we use lazy expiration deletion, which means that on get and getMany type functions we check whether a value is expired and then delete it. If you want a more aggressive expiration policy you can set the checkInterval property to a value greater than 0, which will check for expired keys at the interval you set.
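A minimal sketch of an aggressive expiration policy using checkInterval (the values here are arbitrary):

```typescript
import { CacheableMemory } from 'cacheable';

// check for expired keys every 60 seconds instead of waiting for lazy deletion
const cache = new CacheableMemory({ ttl: '5m', checkInterval: 60000 });

cache.set('session', { user: 'alice' });

// ... later, when shutting down, stop the interval sweep
cache.stopIntervalCheck();
```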
Here are some of the main features of CacheableMemory:

- Scales past the 16,777,216 (2^24) keys limit of a single Map via hashStoreSize. Default is 16 Map objects.
- LRU (Least Recently Used) cache via lruSize. Limited to 16,777,216 (2^24) keys total.
- Expired-key checking via checkInterval.
- Wrap feature to memoize sync and async functions with stampede protection.
- Batch operations: setMany, getMany, deleteMany, and takeMany.
- raw data retrieval with getRaw and getManyRaw methods to get the full metadata of a cache entry.

CacheableMemory uses Map objects to store the keys and values. To scale past the 16,777,216 (2^24) keys limit of a single Map, we use a hash to balance the data across multiple Map objects: the key is hashed, and the hash determines which Map object to use. The default hashing algorithm is djb2Hash, but you can change it by setting the storeHashAlgorithm property in the options. By default the number of Map objects is 16.
NOTE: if you are using the LRU cache feature, then no matter how many Map objects you have, the lruSize is limited to the 16,777,216 (2^24) keys of a single Map object. This is because we use a doubly linked list to manage the LRU cache, and it is not possible to have more than 16,777,216 (2^24) keys in a single Map object.
Here is an example of how to set the number of Map objects:
```typescript
import { CacheableMemory } from 'cacheable';

const cache = new CacheableMemory({
  storeSize: 32, // set the number of Map objects to 32
});
cache.set('key', 'value');
const value = cache.get('key'); // value
```
Here is an example of how to use the storeHashAlgorithm property:
```typescript
import { CacheableMemory } from 'cacheable';

const cache = new CacheableMemory({ storeHashAlgorithm: 'sha256' });
cache.set('key', 'value');
const value = cache.get('key'); // value
```
If you want to provide your own hashing function you can set the storeHashAlgorithm property to a function that takes the key and the number of stores and returns a number in the range of the number of Map stores you have.
```typescript
import { CacheableMemory } from 'cacheable';

/**
 * Custom hash function that takes a key and the size of the store
 * and returns a number between 0 and storeHashSize - 1.
 * @param {string} key - The key to hash.
 * @param {number} storeHashSize - The size of the store (number of Map objects).
 * @returns {number} - A number between 0 and storeHashSize - 1.
 */
const customHash = (key: string, storeHashSize: number) => {
  // custom hashing logic
  return key.length % storeHashSize; // returns a number between 0 and 31 for 32 Map objects
};

const cache = new CacheableMemory({ storeHashAlgorithm: customHash, storeSize: 32 });
cache.set('key', 'value');
const value = cache.get('key'); // value
```
You can enable the LRU (Least Recently Used) feature in CacheableMemory by setting the lruSize property in the options. This will limit the number of keys in the cache to the size you set. When the cache reaches the limit it will remove the least recently used keys. This is useful if you want to limit the memory usage of the cache.

When you set the lruSize we use a doubly linked list to manage the LRU cache and also set the hashStoreSize to 1, which means we will only use a single Map object for the LRU cache. This is because the LRU cache is managed by the doubly linked list, and it is not possible to have more than 16,777,216 (2^24) keys in a single Map object.
```typescript
import { CacheableMemory } from 'cacheable';

const cache = new CacheableMemory({ lruSize: 1 }); // limits the LRU cache to 1 key and sets hashStoreSize to 1
cache.set('key1', 'value1');
cache.set('key2', 'value2');
const value1 = cache.get('key1');
console.log(value1); // undefined - the cache is full and key1 was the least recently used
const value2 = cache.get('key2');
console.log(value2); // value2 - key2 is still in the cache
console.log(cache.size); // 1
```
NOTE: if you set the lruSize property to 0 after it was enabled, the LRU cache feature is disabled and the number of keys in the cache is no longer limited. This also removes the 16,777,216 (2^24) keys limit of a single Map object and allows you to store more keys in the cache.
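A minimal sketch of this, assuming lruSize is settable at runtime as the note implies:

```typescript
import { CacheableMemory } from 'cacheable';

const cache = new CacheableMemory({ lruSize: 1000 }); // LRU feature enabled
cache.set('key', 'value');

cache.lruSize = 0; // disables the LRU feature; the key count is no longer limited
```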
Our goal with cacheable and CacheableMemory is to provide a high performance caching engine that is simple to use and has a robust API. We test it against other caching engines, including less feature-rich ones, to make sure there is little difference in performance. Here are some of the benchmarks we have run:
Memory Benchmark Results:

| name | summary | ops/sec | time/op | margin | samples |
| --- | --- | --- | --- | --- | --- |
| Map (v22) - set / get | 🥇 | 117K | 9µs | ±1.29% | 110K |
| Cacheable Memory (v1.10.0) - set / get | -1.3% | 116K | 9µs | ±0.77% | 110K |
| Node Cache - set / get | -4.1% | 112K | 9µs | ±1.34% | 107K |
| bentocache (v1.4.0) - set / get | -45% | 65K | 17µs | ±1.10% | 100K |
Memory LRU Benchmark Results:

| name | summary | ops/sec | time/op | margin | samples |
| --- | --- | --- | --- | --- | --- |
| quick-lru (v7.0.1) - set / get | 🥇 | 118K | 9µs | ±0.85% | 112K |
| Map (v22) - set / get | -0.56% | 117K | 9µs | ±1.35% | 110K |
| lru.min (v1.1.2) - set / get | -1.7% | 116K | 9µs | ±0.90% | 110K |
| Cacheable Memory (v1.10.0) - set / get | -3.3% | 114K | 9µs | ±1.16% | 108K |
As you can see from the benchmarks, CacheableMemory is on par with other caching engines such as Map, Node Cache, and bentocache. We have also tested it against other LRU caching engines such as quick-lru and lru.min, and it performs well against them too.
CacheableMemory options:

- ttl: The time to live for the cache in milliseconds. Default is undefined, which means values are stored indefinitely.
- useClones: If the cache should use clones for the values. Default is true.
- lruSize: The size of the LRU cache. Default is 0, which is unlimited.
- checkInterval: The interval to check for expired keys in milliseconds. Default is 0, which is disabled.
- storeHashSize: The number of Map objects to use for the cache. Default is 16.
- storeHashAlgorithm: The hashing algorithm to use for the cache. Default is djb2Hash.

CacheableMemory API:

- set(key, value, ttl?): Sets a value in the cache.
- setMany([{key, value, ttl?}]): Sets multiple values in the cache from CacheableItem.
- get(key): Gets a value from the cache.
- getMany([keys]): Gets multiple values from the cache.
- getRaw(key): Gets a value from the cache as CacheableStoreItem.
- getManyRaw([keys]): Gets multiple values from the cache as CacheableStoreItem.
- has(key): Checks if a value exists in the cache.
- hasMany([keys]): Checks if multiple values exist in the cache.
- delete(key): Deletes a value from the cache.
- deleteMany([keys]): Deletes multiple values from the cache.
- take(key): Takes a value from the cache and deletes it.
- takeMany([keys]): Takes multiple values from the cache and deletes them.
- wrap(function, WrapSyncOptions): Wraps a sync function in a cache.
- clear(): Clears the cache.

CacheableMemory properties:

- ttl: The default time to live for the cache in milliseconds. Default is undefined, which is disabled.
- useClones: If the cache should use clones for the values. Default is true.
- lruSize: The size of the LRU cache. Default is 0, which is unlimited.
- size: The number of keys in the cache.
- checkInterval: The interval to check for expired keys in milliseconds. Default is 0, which is disabled.
- storeHashSize: The number of Map objects to use for the cache. Default is 16.
- storeHashAlgorithm: The hashing algorithm to use for the cache. Default is djb2Hash.
- keys: Get the keys in the cache. Read-only.
- items: Get the items in the cache as CacheableStoreItem, for example { key, value, expires? }.
- store: The hash store for the cache, which is an array of Map objects.
- checkExpired(): Checks for expired keys in the cache. This is used by the checkInterval property.
- startIntervalCheck(): Starts the interval check for expired keys if checkInterval is above 0 ms.
- stopIntervalCheck(): Stops the interval check for expired keys.
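The getRaw and getManyRaw methods above return the stored entry rather than just the value. A minimal sketch, assuming the entry follows the { key, value, expires? } shape shown for items:

```typescript
import { CacheableMemory } from 'cacheable';

const cache = new CacheableMemory({ ttl: '1h' });
cache.set('key', 'value');

// getRaw returns the stored item (CacheableStoreItem) instead of just the value
const item = cache.getRaw('key');
console.log(item?.value);   // 'value'
console.log(item?.expires); // expiration timestamp in epoch milliseconds

// getManyRaw does the same for multiple keys
const items = cache.getManyRaw(['key']);
console.log(items[0]?.value); // 'value'
```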
KeyvCacheableMemory

cacheable comes with a built-in storage adapter for Keyv called KeyvCacheableMemory. This takes CacheableMemory and creates a storage adapter for Keyv. This is useful if you want to use CacheableMemory as a storage adapter for Keyv. Here is an example of how to use KeyvCacheableMemory:
```typescript
import { Keyv } from 'keyv';
import { KeyvCacheableMemory } from 'cacheable';

const keyv = new Keyv({ store: new KeyvCacheableMemory() });
await keyv.set('foo', 'bar');
const value = await keyv.get('foo');
console.log(value); // bar
```
Cacheable and CacheableMemory have a feature called wrap that allows you to wrap a function in a cache. This is useful for memoization and caching the results of a function. You can wrap a sync or async function in a cache. Here is an example of how to use the wrap function:
```typescript
import { Cacheable } from 'cacheable';

const asyncFunction = async (value: number) => {
  return Math.random() * value;
};

const cache = new Cacheable();
const options = {
  ttl: '1h', // 1 hour
  keyPrefix: 'p1', // key prefix. This is used if you have multiple functions and need to set a unique prefix.
};
const wrappedFunction = cache.wrap(asyncFunction, options);
console.log(await wrappedFunction(2)); // a random number
console.log(await wrappedFunction(2)); // the same number, from cache
```
With Cacheable we have also included stampede protection, so that a Promise-based call will only be executed once if multiple requests for the same key are made at the same time. Here is an example of how to test for stampede protection:
```typescript
import { Cacheable } from 'cacheable';

const asyncFunction = async (value: number) => {
  return value;
};

const cache = new Cacheable();
const options = {
  ttl: '1h', // 1 hour
  keyPrefix: 'p1', // key prefix. This is used if you have multiple functions and need to set a unique prefix.
};

const wrappedFunction = cache.wrap(asyncFunction, options);
const promises = [];
for (let i = 0; i < 10; i++) {
  promises.push(wrappedFunction(i));
}

const results = await Promise.all(promises); // all results should be the same

console.log(results); // [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
```
In this example we are wrapping an async function in a cache with a ttl of 1 hour. This will cache the result of the function for 1 hour and then expire the value. You can also wrap a sync function in a cache:
```typescript
import { CacheableMemory } from 'cacheable';

const syncFunction = (value: number) => {
  return value * 2;
};

const cache = new CacheableMemory();
const wrappedFunction = cache.wrap(syncFunction, { ttl: '1h', key: 'syncFunction' });
console.log(wrappedFunction(2)); // 4
console.log(wrappedFunction(2)); // 4 from cache
```
In this example we are wrapping a sync function in a cache with a ttl of 1 hour. This will cache the result of the function for 1 hour and then expire the value. You can also set the key property in the wrap() options to set a custom key for the cache.
When an error occurs in the function, the value is not cached and the error is returned. This is useful if you want to cache the results of a function but not cache errors. If you want errors to be cached you can set the cacheError property to true in the wrap() options. This is disabled by default.
```typescript
import { CacheableMemory } from 'cacheable';

const syncFunction = (value: number) => {
  throw new Error('error');
};

const cache = new CacheableMemory();
const wrappedFunction = cache.wrap(syncFunction, { ttl: '1h', key: 'syncFunction', cacheError: true });
console.log(wrappedFunction(1)); // error
console.log(wrappedFunction(1)); // error from cache
```
If you would like to generate your own key for the wrapped function you can set the createKey property in the wrap() options. This is useful if you want to generate a key based on the arguments of the function or any other criteria.
```typescript
// assumes the standalone wrap export and WrapOptions type from cacheable
import { Cacheable, wrap, type WrapOptions } from 'cacheable';

const cache = new Cacheable();
const options: WrapOptions = {
  cache,
  keyPrefix: 'test',
  createKey: (function_, arguments_, options?: WrapOptions) => `customKey:${options?.keyPrefix}:${arguments_[0]}`,
};

const wrapped = wrap((argument: string) => `Result for ${argument}`, options);

const result1 = await wrapped('arg1');
const result2 = await wrapped('arg1'); // Should hit the cache

console.log(result1); // Result for arg1
console.log(result2); // Result for arg1 (from cache)
```
We pass in the function being wrapped, the arguments passed to the function, and the options used to wrap the function. You can then use these to generate a custom key for the cache.
The getOrSet method provides a convenient way to implement the cache-aside pattern. It attempts to retrieve a value from cache, and if not found, calls the provided function to compute the value and store it in cache before returning it. Here are the options:
```typescript
export type GetOrSetFunctionOptions = {
  ttl?: number | string;
  cacheErrors?: boolean;
  throwErrors?: boolean;
};
```
Here is an example of how to use the getOrSet method:
```typescript
import { Cacheable } from 'cacheable';

const cache = new Cacheable();
// Use getOrSet to fetch a computed value
const function_ = async () => Math.random() * 100;
const value = await cache.getOrSet('randomValue', function_, { ttl: '1h' });
console.log(value); // e.g. 42.123456789
```
You can also pass a function that computes the key:
```typescript
import { Cacheable, GetOrSetOptions } from 'cacheable';

const cache = new Cacheable();

// Function to generate a key based on options
const generateKey = (options?: GetOrSetOptions) => {
  return `custom_key_:${options?.cacheId || 'default'}`;
};

const function_ = async () => Math.random() * 100;
const value = await cache.getOrSet(generateKey, function_, { ttl: '1h' });
```
You can contribute by forking the repo and submitting a pull request. Please make sure to add tests and update the documentation. To learn more about how to contribute, go to our main README at https://github.com/jaredwray/cacheable, which explains how to open a pull request, ask a question, or post an issue.