memoizerific
Fast, small, efficient JavaScript memoization lib to memoize JS functions.
npm install memoizerific
109 stars · 170 commits · 12 forks · 5 watching · 2 branches · 3 contributors
Updated on 07 Aug 2024
JavaScript (100%)
Downloads

Period | Downloads | Change vs. previous period |
---|---|---|
Last day | 896,008 | -23.3% |
Last week | 5,633,924 | -5.1% |
Last month | 24,747,606 | +2.7% |
Last year | 262,887,707 | +12.8% |
Fast (see benchmarks), small (1k min/gzip), efficient JavaScript memoization lib to memoize JS functions.
Uses JavaScript's Map() object for instant lookups, or a performant polyfill if Map is not available; it does not do expensive serialization or string manipulation.
Supports multiple complex arguments. Includes least-recently-used (LRU) caching to maintain only the most recent specified number of results.
Compatible with the browser and nodejs.
Memoization is the process of caching function results so that they can be returned cheaply, without re-execution, when the function is called again with the same arguments. This is especially useful with the rise of [redux](https://github.com/rackt/redux) and the push to calculate all derived data on the fly instead of maintaining it in state.
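To illustrate the concept, here is a hand-rolled sketch of a naive memoizer (for illustration only, not memoizerific's actual implementation) that keys a Map on the stringified arguments:

```javascript
// Naive memoizer sketch: cache results in a Map keyed by the
// JSON-stringified argument list.
function memoizeSimple(fn) {
  const cache = new Map();
  return function (...args) {
    const key = JSON.stringify(args); // naive key; works for simple values
    if (cache.has(key)) return cache.get(key); // cache hit: skip re-execution
    const result = fn.apply(this, args);
    cache.set(key, result); // cache miss: compute and store
    return result;
  };
}

let calls = 0;
const slowSquare = memoizeSimple(function (n) {
  calls++; // count real executions
  return n * n;
});

slowSquare(4); // computes: calls becomes 1
slowSquare(4); // cached: calls stays 1
```

Note that the JSON.stringify key breaks down for complex, circular, or reference-sensitive arguments, which is exactly why memoizerific uses Map-based reference lookups instead of serialization.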
NPM:
npm install memoizerific --save
Or use one of the compiled distributions, compatible with any environment (UMD):
```javascript
const memoizerific = require('memoizerific');

// memoize the 50 most recent argument combinations of our function
const memoized = memoizerific(50)(function(arg1, arg2, arg3) {
    // many long expensive calls here
});

memoized(1, 2, 3); // that took long to process
memoized(1, 2, 3); // this one was instant!

memoized(2, 3, 4); // expensive again :(
memoized(2, 3, 4); // this one was cheap!
```
Or with complex arguments:
```javascript
const
    complexArg1 = { a: { b: { c: 99 } } },       // hairy nested object
    complexArg2 = [{ z: 1 }, { q: [{ x: 3 }] }], // funky objects within arrays within arrays
    complexArg3 = new Set();                     // weird set object, anything goes

memoized(complexArg1, complexArg2, complexArg3); // slow
memoized(complexArg1, complexArg2, complexArg3); // instant!
```
There are two required arguments:
limit (required):
the max number of items to cache before the least recently used items are removed.
fn (required):
the function to memoize.
The arguments are specified like this:
```javascript
memoizerific(limit)(fn);
```
Examples:
```javascript
// memoize only the last argument combination
memoizerific(1)(function(arg1, arg2) {});

// memoize the last 10,000 unique argument combinations
memoizerific(10000)(function(arg1, arg2) {});

// memoize an unlimited number of results (not recommended)
memoizerific(0)(function(arg1) {});
```
The cache works using LRU logic, purging the least recently used results when the limit is reached. For example:
```javascript
// memoize 1 result
const myMemoized = memoizerific(1)(function(arg1) {});

myMemoized('a'); // function runs, result is cached
myMemoized('a'); // cached result is returned
myMemoized('b'); // function runs again, new result is cached, old cached result is purged
myMemoized('b'); // cached result is returned
myMemoized('a'); // function runs again
```
Arguments are compared using strict equality, while taking into account small edge cases like NaN !== NaN (NaN is a valid argument type). A complex object will only trigger a cache hit if it refers to the exact same object in memory, not just another object that has similar properties. For example, the following code will not produce a cache hit even though the objects look the same:
```javascript
const myMemoized = memoizerific(1)(function(arg1) {});

myMemoized({ a: true });
myMemoized({ a: true }); // not cached, the two objects are different instances even though they look the same
```
This is because a new object is being created on each invocation, rather than the same object being passed in.
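The comparison rule described above (strict equality, except that NaN matches NaN) corresponds to JavaScript's SameValueZero semantics. A sketch of such a comparison, for illustration only rather than memoizerific's actual source:

```javascript
// SameValueZero-style comparison: strict equality, plus NaN matches NaN.
// NaN is the only JavaScript value that is not strictly equal to itself.
function argsEqual(a, b) {
  return a === b || (a !== a && b !== b);
}

const obj = { a: true };
argsEqual(obj, obj);                 // true: same reference
argsEqual({ a: true }, { a: true }); // false: different instances
argsEqual(NaN, NaN);                 // true: NaN is a valid cache key
```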
A common scenario is passing an options object, like do({ opt1: 10000, opt2: 'abc' }). If that function were memoized, it would never hit the cache because the options object would be newly created on each invocation.
To get around this you can:
Store complex arguments separately for use later on:
```javascript
// `do` is a reserved word in JavaScript, so the function is named doStuff
const doStuff = memoizerific(1)(function(opts) {
    // function body
});

// store the options object
const opts = { opt1: 10000, opt2: 'abc' };

doStuff(opts);
doStuff(opts); // cache hit
```
Destructure complex objects into simple properties then use the simple properties inside the memoized function to re-create the complex object:
```javascript
const callDoStuff = memoizerific(1)(function(prop1, prop2) {
    return doStuff({ prop1, prop2 });
});

callDoStuff(1000, 'abc');
callDoStuff(1000, 'abc'); // cache hit
```
Meta properties are available for introspection for debugging and informational purposes. They should not be manipulated directly, only read. The following properties are available:
- memoizedFn.limit: the cache limit that was passed in. This will never change.
- memoizedFn.wasMemoized: true if the last invocation was a cache hit, otherwise false.
- memoizedFn.cache: the cache object that stores all the memoized results.
- memoizedFn.lru: the lru object that stores the most recent arguments called.
For example:
```javascript
const myMemoized = memoizerific(1)(function(arg1, arg2) {});

myMemoized(1000, 'abc');
console.log(myMemoized.wasMemoized); // false

myMemoized(1000, 'abc');
console.log(myMemoized.wasMemoized); // true
```
There are many memoization libs available for JavaScript. Some have specialized use-cases, such as memoizing file-system access or async server requests, while others, such as this one, tackle the more general case of memoizing standard synchronous functions. Some criteria to look for when shopping for a lib like this:
Two libs with traction that meet the criteria are:
:heavy_check_mark: Memoizee (@medikoo)
:heavy_check_mark: LRU-Memoize (@erikras)
Benchmarks were performed with complex data. Example arguments look like:
```javascript
myMemoized(
    { a: 1, b: [{ c: 2, d: { e: 3 } }] }, // 1st argument
    [{ x: 'x', q: 'q' }, { b: 8, c: 9 }, { b: 2, c: [{ x: 5, y: 3 }, { x: 2, y: 7 }] }, { b: 8, c: 9 }, { b: 8, c: 9 }], // 2nd argument
    { z: 'z' }, // 3rd argument
    ... // 4th, 5th... argument
);
```
Tests involved calling the memoized functions thousands of times with varying numbers of arguments (between 2-8) and varying amounts of data repetition (more repetition means more cache hits, and vice versa).
The following measurements are from 5,000 iterations of each combination of argument count and variance, on Firefox 44:
Cache Size | Num Args | Approx. Cache Hits (variance) | LRU-Memoize | Memoizee | Memoizerific | % Faster |
---|---|---|---|---|---|---|
10 | 2 | 99% | 19ms | 31ms | 10ms | 90% |
10 | 2 | 62% | 212ms | 319ms | 172ms | 23% |
10 | 2 | 7% | 579ms | 617ms | 518ms | 12% |
100 | 2 | 99% | 137ms | 37ms | 20ms | 85% |
100 | 2 | 69% | 696ms | 245ms | 161ms | 52% |
100 | 2 | 10% | 1,057ms | 649ms | 527ms | 23% |
500 | 4 | 95% | 476ms | 67ms | 62ms | 8% |
500 | 4 | 36% | 2,642ms | 703ms | 594ms | 18% |
500 | 4 | 11% | 3,619ms | 880ms | 725ms | 21% |
1000 | 8 | 95% | 1,009ms | 52ms | 65ms | 25% |
1000 | 8 | 14% | 10,477ms | 659ms | 635ms | 4% |
1000 | 8 | 1% | 6,943ms | 1,501ms | 1,466ms | 2% |
Cache Size : The maximum number of results to cache.
Num Args : The number of arguments the memoized function accepts, ex. fn(arg1, arg2, arg3) is 3.
Approx. Cache Hits (variance) : How varied the passed in arguments are. If the exact same arguments are always used, the cache would be hit 100% of the time. If the same arguments are never used, the cache would be hit 0% of the time.
% Faster : How much faster the 1st best performer was than the 2nd best performer (not against the worst performer), computed as (2nd best time - best time) / best time. For example, 19ms vs. 10ms gives (19 - 10) / 10 = 90%.
LRU-Memoize performed well with few arguments and lots of cache hits, but degraded quickly as the parameters became less favorable. At 4+ arguments it was up to 20x slower, enough to cause material concern.
Memoizee performed reliably with good speed.
Memoizerific was fastest by about 30% with predictable decreases in performance as tests became more challenging.
Released under an MIT license.
Like it, star it.
No vulnerabilities found.

OpenSSF Scorecard (last scanned on 2024-11-18):
- no binaries found in the repo
- 0 existing vulnerabilities detected
- license file detected
- Found 1/29 approved changesets -- score normalized to 0
- 0 commit(s) and 0 issue activity found in the last 90 days -- score normalized to 0
- no effort to earn an OpenSSF best practices badge detected
- security policy file not detected
- project is not fuzzed
- branch protection not enabled on development/release branches
- SAST tool is not run on all commits -- score normalized to 0

The Open Source Security Foundation is a cross-industry collaboration to improve the security of open source software (OSS). The Scorecard provides security health metrics for open source projects.