Gathering detailed insights and metrics for mitata
npm install mitata
Languages: JavaScript (46.8%), Zig (31.67%), C++ (17.1%), TypeScript (4.43%)
MIT License · 1,778 Stars · 73 Commits · 24 Forks · 5 Watchers · 1 Branch · 6 Contributors · Updated on Mar 15, 2025
| Field | Value |
|---|---|
| Latest Version | 1.0.34 |
| Package Id | mitata@1.0.34 |
| Unpacked Size | 131.57 kB |
| Size | 29.02 kB |
| File Count | 8 |
| NPM Version | 10.9.2 |
| Node Version | 23.5.0 |
| Published on | Feb 04, 2025 |
Cumulative downloads: 1,513,151

| Period | Downloads | Change |
|---|---|---|
| Last Day | 4,037 | -18% vs. previous day |
| Last Week | 42,810 | +4.8% vs. previous week |
| Last Month | 192,991 | -7.8% vs. previous month |
| Last Year | 1,313,234 | +699.1% vs. previous year |
No dependencies detected.
bun add mitata
npm install mitata
try mitata in browser with ai assistant at https://bolt.new/~/mitata
node --expose-gc ...
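For a first run, only `bench` and `run` are needed. A minimal quick-start sketch (the benchmarked expression here is just an illustration):

```js
import { bench, run } from 'mitata';

// register a benchmark; the name is free-form
bench('new Array(1024)', () => new Array(1024));

// execute all registered benchmarks and print results
await run();
```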
javascript:

```js
import { run } from 'mitata';

await run({ format: 'json' }); // output json
await run({ filter: /new Array.*/ }); // only run benchmarks that match regex filter
await run({ throw: true }); // will immediately throw instead of handling errors quietly
await run({ format: { mitata: { name: 'fixed' } } }); // benchmark name column has fixed length
```

c++ single header:

```cpp
auto stats = runner.run({ .colors = true, .format = "json", .filter = std::regex(".*") });
```
By default, on runtimes that expose manual gc (like bun, or node with `--expose-gc`), mitata runs garbage collection once after each benchmark warmup. This behavior can be customized with the `gc(mode)` method on benchmarks:
```js
bench('lots of allocations', () => {
  Array.from({ length: 1024 }, () => Array.from({ length: 1024 }, () => new Array(1024)));
})
  // mode: false | 'once' (default) | 'inner'
  //
  // 'once' mode runs gc after warmup
  // 'inner' mode runs gc after warmup and before each (batch-)iteration
  .gc('inner');
```
For runtimes that provide manual garbage collection or expose javascript vm heap usage metrics, an additional row is shown with garbage collection timings and/or estimated heap usage.
```
------------------------------------------- -------------------------------
new Array(512)     509.42 ns/iter 536.53 ns  ▅▃█ ▂
          (449.52 ns … 632.54 ns) 609.34 ns  ███ ▃▅▆█▇
          (  0.00  b …  24.00 kb)   1.61 kb  ▆████▅▄██████▅▅▅█▅▄▂▂

Array.from(512)      1.29 µs/iter   1.30 µs  ▂▆█
            (1.27 µs …   1.48 µs)   1.40 µs  ▂███▇▆▃▃▂▁▁▂▁▁▁▁▁▁▁▁▁
     gc(457.25 µs … 760.54 µs)    512.32  b  (  0.00  b …  84.00 kb)
```
Out of the box, mitata can detect the engine/runtime it's running on and fall back to alternative non-standard I/O functions. If your engine or runtime is missing support, open an issue or PR requesting support.
```sh
$ xs bench.mjs
$ quickjs bench.mjs
$ d8 --expose-gc bench.mjs
$ spidermonkey -m bench.mjs
$ graaljs --js.timer-resolution=1 bench.mjs
$ /System/Library/Frameworks/JavaScriptCore.framework/Versions/Current/Helpers/jsc bench.mjs
```
```js
// bench.mjs

import { print } from './src/lib.mjs';
import { run, bench } from './src/main.mjs'; // git clone
// import { run, bench } from './node_modules/mitata/src/main.mjs'; // npm install (pick one)

print('hello world'); // works on every engine
```
With other benchmarking libraries, it's often quite hard to write benchmarks that sweep over a range or run the same function with different arguments without writing spaghetti code. With mitata, converting your benchmark to use arguments is just a function call away.
```js
import { bench } from 'mitata';

bench(function* look_mom_no_spaghetti(state) {
  const len = state.get('len');
  const len2 = state.get('len2');
  yield () => new Array(len * len2);
})

.args('len', [1, 2, 3])
.range('len', 1, 1024) // 1, 8, 64, 512...
.dense_range('len', 1, 100) // 1, 2, 3 ... 99, 100
.args({ len: [1, 2, 3], len2: ['4', '5', '6'] }); // every possible combination
```
For cases where you need a unique copy of a value for each iteration, mitata supports computed parameters that do not count towards benchmark results (note: there is no guarantee of recompute time, order, or call count):
```js
bench('deleting $keys from object', function* (state) {
  const keys = state.get('keys');

  const obj = {};
  for (let i = 0; i < keys; i++) obj[i] = i;

  yield {
    [0]() {
      return { ...obj };
    },

    bench(p0) {
      for (let i = 0; i < keys; i++) delete p0[i];
    },
  };
}).args('keys', [1, 10, 100]);
```
The `concurrency` option enables transparent concurrent execution of asynchronous benchmarks, providing insight into how they behave under concurrent load (note: concurrent benchmarks may have higher variance due to scheduling, contention, event loop and async overhead):
```js
bench('sleepAsync(1000) x $concurrency', function* () {
  // concurrency inherited from arguments
  yield async () => await sleepAsync(1000);
}).args('concurrency', [1, 5, 10]);

bench('sleepAsync(1000) x 5', function* () {
  yield {
    // concurrency is set manually
    concurrency: 5,

    async bench() {
      await sleepAsync(1000);
    },
  };
});
```
bun add @mitata/counters
npm install @mitata/counters
supported on: macos (apple silicon) | linux (amd64, aarch64)

linux: `/proc/sys/kernel/perf_event_paranoid` has to be set to `2` or lower
macos:
By installing the @mitata/counters package, you can enable collection and display of hardware counters for benchmarks.
```
------------------------------------------- -------------------------------
new Array(1024)      332.67 ns/iter 337.90 ns  █
            (295.63 ns … 507.93 ns) 455.66 ns  ▂██▇▄▂▂▂▁▂▁▃▃▃▂▂▁▁▁▁▁
                2.41 ipc ( 48.66% stalls)  37.89% L1 data cache
         1.11k cycles  2.69k instructions  33.09% retired LD/ST ( 888.96)

new URL(google.com)  246.40 ns/iter 245.10 ns  █▃
            (206.01 ns … 841.23 ns) 302.39 ns  ▁▁▁▁▂███▇▃▂▂▂▂▂▂▂▁▁▁▁
                4.12 ipc (  1.05% stalls)  98.88% L1 data cache
        856.49 cycles  3.53k instructions  28.65% retired LD/ST (  1.01k)
```
For those who love doing micro-benchmarks, mitata can automatically detect and inform you about optimization passes like dead code elimination without requiring any special engine flags.
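For example, a benchmark whose result the engine can prove unused may get flagged. A minimal sketch that would likely trigger the `!` marker shown in the output below:

```js
import { bench, run } from 'mitata';

// both results are unobservable, so the JIT is free to eliminate
// the work and mitata may flag the benchmarks with `!`
bench('1 + 1', () => 1 + 1);
bench('empty function', () => {});

await run();
```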
```
-------------------------------------- -------------------------------
1 + 1           318.63 ps/iter 325.37 ps  ▇    █                  !
       (267.92 ps … 14.28 ns) 382.81 ps  ▁▁▁▁▁▁▁█▁▁█▁▁▁▁▁▁▁▁▁▁
empty function  319.36 ps/iter 325.37 ps    █ ▅                  !
       (248.62 ps … 46.61 ns) 382.81 ps  ▁▁▁▁▁▁▃▁▁█▁█▇▁▁▁▁▁▁▁▁

! = benchmark was likely optimized out (dead code elimination)
```
With mitata’s ascii rendering capabilities, you can easily visualize samples as barplots, boxplots, lineplots, and histograms, and get clear summaries without any additional tools or dependencies.
```
import { summary, barplot, boxplot, lineplot } from 'mitata';

// wrap bench() calls in visualization scope
barplot(() => {
  bench(...)
});

                   ┌                                            ┐
             1 + 1 ┤■ 318.11 ps
        Date.now() ┤■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■ 27.69 ns
                   └                                            ┘

// scopes can be async
await boxplot(async () => {
  // ...
});

                   ┌                                            ┐
                     ╷┌─┬─┐                              ╷
       Bubble Sort   ├┤ │ ├───────────────────────┤
                     ╵└─┴─┘                              ╵
                     ┬   ╷
        Quick Sort   │───┤
                     ┴   ╵
                     ┬
       Native Sort   │
                     ┴
                   └                                            ┘
                   90.88 µs           2.43 ms           4.77 ms

// can combine multiple visualizations
lineplot(() => {
  summary(() => {
    // ...
  });

  // bench() calls here won't be part of summary
});

summary
  new Array($len)
    5.42…8.33x faster than Array.from($len)

                   ┌                                            ┐
 Array.from($size)                                           ⢠⠊
  new Array($size)                                         ⢀⠔⠁
                                                          ⡠⠃
                                                        ⢀⠎
                                                       ⡔⠁
                                                     ⡠⠊
                                                   ⢀⠜
                                                  ⡠⠃
                                                ⡔⠁
                                              ⢀⠎
                                             ⡠⠃
                                           ⢀⠜
                                          ⢠⠊          ⣀⣀⠤⠤⠒
                                        ⡰⠁      ⣀⡠⠤⠔⠒⠊⠉
                                   ⣀⣀⣀⠤⠜  ⣀⡠⠤⠒⠊⠉
                   ⣤⣤⣤⣤⣤⣤⣤⣤⣤⣤⣤⣤⣔⣒⣒⣊⣉⠭⠤⠤⠤⠤⠤⠒⠊⠉
                   └                                            ┘
```
In case you don’t need all the fluff that comes with mitata, or just need raw results, mitata exports its fundamental building blocks so you can easily build your own tooling and wrappers without losing any of the core benefits of using mitata.
```cpp
#include "src/mitata.hpp"

int main() {
  auto stats = mitata::lib::fn([]() { /***/ });
}
```
```js
import { B, measure } from 'mitata';

// lowest level for power users
const stats = await measure(function* (state) {
  const size = state.get('x');

  yield {
    [0]() {
      return size;
    },

    bench(size) {
      return new Array(size);
    },
  };
}, {
  args: { x: 1 },
  batch_samples: 5 * 1024,
  min_cpu_time: 1000 * 1e6,
});

// explore how magic happens
console.log(stats.debug); // -> jit optimized source code of benchmark

// higher level api that includes mitata's argument and range features
const b = new B('new Array($x)', function* (state) {
  const size = state.get('x');
  yield () => new Array(size);
}).args('x', [1, 5, 10]);

const trial = await b.run();
```
By leveraging the power of javascript JIT compilation, mitata is able to generate zero-overhead measurement loops that provide picosecond precision in timing measurements. These loops are so precise that they can even be reused to provide additional features like CPU clock frequency estimation and dead code elimination detection, all while staying inside the javascript vm sandbox.
With computed parameters and garbage collection tuning, you can tap into mitata's code generation capabilities to further refine the accuracy of your benchmarks. Using computed parameters ensures that parameter computation is moved outside the benchmark, preventing the javascript JIT from performing loop-invariant code motion.
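As a rough sketch of what such a comparison pair could look like (an assumption for illustration, not the actual tools/compare.mjs source):

```js
import { bench, run, do_not_optimize } from 'mitata';

const a = 2, b = 4;

// plain: operands are closure constants the JIT can reason about,
// so parts of the work may be hoisted out of the measurement loop
bench('a / b', () => do_not_optimize(a / b));

// computed: operands are produced by computed parameters outside
// the measured code, blocking loop-invariant code motion
bench('a / b (computed)', function* () {
  yield {
    [0]() { return a; },
    [1]() { return b; },
    bench(x, y) { return do_not_optimize(x / y); },
  };
});

await run();
```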
```
// node --expose-gc --allow-natives-syntax tools/compare.mjs
clk: ~2.71 GHz
cpu: Apple M2 Pro
runtime: node 23.3.0 (arm64-darwin)

benchmark                   avg (min … max) p75   p99    (min … top 1%)
------------------------------------------- -------------------------------
a / b                          4.59 ns/iter   4.44 ns  █
                        (4.33 ns … 25.86 ns)   6.91 ns  ██▂▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁
                   6.70 ipc (  2.17% stalls)   NaN% L1 data cache
          16.80 cycles 112.52 instructions    0.00% retired LD/ST (   0.00)

a / b (computed)               4.23 ns/iter   4.10 ns  ▇█
                        (3.88 ns … 30.03 ns)   7.26 ns  ██▅▂▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁
                   6.40 ipc (  2.10% stalls)   NaN% L1 data cache
          15.70 cycles 100.53 instructions    0.00% retired LD/ST (   0.00)

4.59 ns/iter - https://npmjs.com/mitata

// vs other libraries

a / b x 90,954,882 ops/sec ±2.13% (92 runs sampled)
10.99 ns/iter - https://npmjs.com/benchmark

┌─────────┬───────────┬──────────────────────┬─────────────────────┬────────────────────────────┬───────────────────────────┬──────────┐
│ (index) │ Task name │ Latency average (ns) │ Latency median (ns) │ Throughput average (ops/s) │ Throughput median (ops/s) │ Samples  │
├─────────┼───────────┼──────────────────────┼─────────────────────┼────────────────────────────┼───────────────────────────┼──────────┤
│ 0       │ 'a / b'   │ '27.71 ± 0.09%'      │ '41.00'             │ '28239766 ± 0.01%'         │ '24390243'                │ 36092096 │
└─────────┴───────────┴──────────────────────┴─────────────────────┴────────────────────────────┴───────────────────────────┴──────────┘
27.71 ns/iter - vitest bench / https://npmjs.com/tinybench

a / b x 86,937,932 ops/sec (11 runs sampled) v8-never-optimize=true min..max=(11.32ns...11.62ns)
11.51 ns/iter - https://npmjs.com/bench-node

╔══════════════╤═════════╤════════════════════╤═══════════╗
║ Slower tests │ Samples │ Result             │ Tolerance ║
╟──────────────┼─────────┼────────────────────┼───────────╢
║ Fastest test │ Samples │ Result             │ Tolerance ║
╟──────────────┼─────────┼────────────────────┼───────────╢
║ a / b        │ 10000   │ 14449822.99 op/sec │ ± 4.04 %  ║
╚══════════════╧═════════╧════════════════════╧═══════════╝
69.20 ns/iter - https://npmjs.com/cronometro
```
```
// node --expose-gc --allow-natives-syntax --jitless tools/compare.mjs
clk: ~0.06 GHz
cpu: Apple M2 Pro
runtime: node 23.3.0 (arm64-darwin)

benchmark                   avg (min … max) p75   p99    (min … top 1%)
------------------------------------------- -------------------------------
a / b                         74.52 ns/iter  75.53 ns  █
                     (71.96 ns … 104.94 ns)  92.01 ns  █▅▇▅▅▃▃▂▁▁▁▁▁▁▁▁▁▁▁▁▁
                   5.78 ipc (  0.51% stalls)   NaN% L1 data cache
         261.51 cycles   1.51k instructions   0.00% retired LD/ST (   0.00)

a / b (computed)              56.05 ns/iter  57.20 ns  █
                      (53.62 ns … 84.69 ns)  73.21 ns  █▅▆▅▅▃▃▂▂▁▁▁▁▁▁▁▁▁▁▁▁
                   5.65 ipc (  0.59% stalls)   NaN% L1 data cache
         197.74 cycles   1.12k instructions   0.00% retired LD/ST (   0.00)

74.52 ns/iter - https://npmjs.com/mitata

// vs other libraries

a / b x 11,232,032 ops/sec ±0.50% (99 runs sampled)
89.03 ns/iter - https://npmjs.com/benchmark

┌─────────┬───────────┬──────────────────────┬─────────────────────┬────────────────────────────┬───────────────────────────┬─────────┐
│ (index) │ Task name │ Latency average (ns) │ Latency median (ns) │ Throughput average (ops/s) │ Throughput median (ops/s) │ Samples │
├─────────┼───────────┼──────────────────────┼─────────────────────┼────────────────────────────┼───────────────────────────┼─────────┤
│ 0       │ 'a / b'   │ '215.53 ± 0.08%'     │ '208.00'            │ '4786095 ± 0.01%'          │ '4807692'                 │ 4639738 │
└─────────┴───────────┴──────────────────────┴─────────────────────┴────────────────────────────┴───────────────────────────┴─────────┘
215.53 ns/iter - vitest bench / https://npmjs.com/tinybench

a / b x 10,311,999 ops/sec (11 runs sampled) v8-never-optimize=true min..max=(95.66ns...97.51ns)
96.86 ns/iter - https://npmjs.com/bench-node

╔══════════════╤═════════╤═══════════════════╤═══════════╗
║ Slower tests │ Samples │ Result            │ Tolerance ║
╟──────────────┼─────────┼───────────────────┼───────────╢
║ Fastest test │ Samples │ Result            │ Tolerance ║
╟──────────────┼─────────┼───────────────────┼───────────╢
║ a / b        │ 2000    │ 4664908.00 op/sec │ ± 0.94 %  ║
╚══════════════╧═════════╧═══════════════════╧═══════════╝
214.37 ns/iter - https://npmjs.com/cronometro
```
Creating accurate and meaningful benchmarks requires careful attention to how modern JavaScript engines optimize code. This section covers essential concepts and best practices to ensure your benchmarks measure actual performance characteristics rather than optimization artifacts.
The JIT can detect and eliminate code that has no observable effects. To ensure your benchmark code executes as intended, you must create observable side effects.
```js
import { do_not_optimize } from 'mitata';

bench(function* () {
  // ❌ Bad: jit can see that function has zero side-effects
  yield () => new Array(0);
  // will get optimized to:
  /*
  yield () => {};
  */

  // ✅ Good: do_not_optimize(value) emits code that causes side-effects
  yield () => do_not_optimize(new Array(0));
});
```
For benchmarks involving significant memory allocations, controlling garbage collection frequency can improve the consistency of results.
```js
// ❌ Bad: unpredictable gc pauses
bench(() => {
  const bigArray = new Array(1000000);
});

// ✅ Good: gc before each (batch-)iteration
bench(() => {
  const bigArray = new Array(1000000);
}).gc('inner'); // run gc before each iteration
```
JavaScript engines can optimize away repeated computations by hoisting them out of loops or caching results. Use computed parameters to prevent loop invariant code motion optimization.
```js
bench(function* (ctx) {
  const str = 'abc';

  // ❌ Bad: JIT sees that both str and the 'c' search value are constants/comptime-known
  yield () => str.includes('c');
  // will get optimized to:
  /*
  yield () => true;
  */

  // ❌ Bad: JIT sees that computation doesn't depend on anything inside the loop
  const substr = ctx.get('substr');
  yield () => str.includes(substr);
  // will get optimized to:
  /*
  const $0 = str.includes(substr);
  yield () => $0;
  */

  // ✅ Good: computed parameters prevent the jit from performing any loop optimizations
  yield {
    [0]() {
      return str;
    },

    [1]() {
      return substr;
    },

    bench(str, substr) {
      return do_not_optimize(str.includes(substr));
    },
  };
}).args('substr', ['c']);
```
MIT © evanwashere
No security vulnerabilities found.