@pawel-up/benchmark
npm install @pawel-up/benchmark
Package scores:
- Supply Chain: 76.7
- Quality: 99.1
- Maintenance: 84.9
- Vulnerability: 100
- License: 82
Languages: TypeScript (98.83%), JavaScript (1.17%)
Downloads:
- Total: 700
- Last Day: 26
- Last Week: 164
- Last Month: 546
- Last Year: 700
License: NOASSERTION
25 commits, 1 branch, 1 contributor
Updated on May 18, 2025
Latest Version: 1.0.5
Package Id: @pawel-up/benchmark@1.0.5
Unpacked Size: 543.97 kB
Size: 115.76 kB
File Count: 90
NPM Version: 10.8.2
Node Version: 20.19.1
Published on: May 18, 2025
Tired of slow, inaccurate, or overly complex benchmarking tools? @pawel-up/benchmark is a modern, lightweight, and highly accurate benchmarking library designed for JavaScript and TypeScript. It provides everything you need to measure the performance of your code with confidence.
Why Choose @pawel-up/benchmark?

@pawel-up/benchmark uses advanced techniques like warm-up iterations, adaptive inner iterations, and outlier removal to ensure highly accurate and reliable results. This library is designed to be a lean and powerful core for benchmarking: integrations for CLI, file output, and other features are intended to be built on top of this core.
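To make those techniques concrete, here is a stripped-down sketch of warm-up iterations and outlier removal. This is not the library's implementation; the IQR filter in particular is an assumed, common heuristic used only for illustration.

```ts
// Illustrative only: warm-up plus outlier trimming, not the library's internals.
function measure(fn: () => void, warmupIterations = 10, samples = 50): number[] {
  // Warm-up: run the function un-timed so the JIT can optimize it first.
  for (let i = 0; i < warmupIterations; i++) fn();

  // Collect timed samples.
  const times: number[] = [];
  for (let i = 0; i < samples; i++) {
    const start = performance.now();
    fn();
    times.push(performance.now() - start);
  }
  return removeOutliers(times);
}

// Drop samples outside 1.5 * IQR (an assumed heuristic, not the library's rule).
function removeOutliers(times: number[]): number[] {
  const sorted = [...times].sort((a, b) => a - b);
  const q1 = sorted[Math.floor(sorted.length * 0.25)];
  const q3 = sorted[Math.floor(sorted.length * 0.75)];
  const iqr = q3 - q1;
  return sorted.filter((t) => t >= q1 - 1.5 * iqr && t <= q3 + 1.5 * iqr);
}
```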
The reports include the following statistics:

- `ops` - Operations per second.
- `rme` - Relative Margin of Error (RME).
- `me` - Margin of error.
- `stddev` - Sample standard deviation.
- `mean` - Sample arithmetic mean.
- `sample` - The sample of execution times.
- `sem` - The standard error of the mean.
- `variance` - The sample variance.
- `size` - Sample size.
- `cohensd` - Cohen's d effect size.
- `sed` - The standard error of the difference in means.
- `dmedian` - The difference between the sample medians of the two benchmark runs.
- `pmedian` - The percentage difference between the sample medians of the two benchmark runs.
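These statistics follow the standard textbook definitions. For reference, here is a minimal sketch computing the single-run values from a raw sample (assuming times in milliseconds and a ~95% confidence level; the library's exact implementation may differ):

```ts
// Compute the single-run statistics above from a sample of execution times.
// Assumes times are in milliseconds; uses a z-value of 1.96 (~95% confidence).
function stats(sample: number[]) {
  const size = sample.length;
  const mean = sample.reduce((a, b) => a + b, 0) / size;
  const variance = sample.reduce((acc, t) => acc + (t - mean) ** 2, 0) / (size - 1);
  const stddev = Math.sqrt(variance);
  const sem = stddev / Math.sqrt(size); // standard error of the mean
  const me = sem * 1.96;                // margin of error
  const rme = (me / mean) * 100;        // relative margin of error, in percent
  const ops = 1000 / mean;              // operations per second
  return { size, mean, variance, stddev, sem, me, rme, ops };
}
```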
Installation:

```sh
npm install @pawel-up/benchmark
# or
yarn add @pawel-up/benchmark
```
Basic Usage (Single Benchmark):
```ts
import { Benchmarker } from '@pawel-up/benchmark';

// Your function to benchmark
async function myAsyncFunction() {
  // ... your code ...
  await new Promise(resolve => setTimeout(resolve, 10));
}

async function main() {
  const benchmarker = new Benchmarker('My Async Benchmark', myAsyncFunction, {
    maxIterations: 100,
    maxExecutionTime: 5000,
  });
  await benchmarker.run();
  const report = benchmarker.getReport();
  console.log(report);
}

main();
// Note: This example uses `console.log` for demonstration purposes. The core library does not include any built-in reporters.
```
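Since the core ships no reporters, what you do with the report is up to you. For instance, instead of dumping the whole object, you could print a one-line summary; a sketch using only fields documented in the BenchmarkReport interface below:

```ts
// Sketch: replaces `console.log(report)` above with a formatted summary.
const report = benchmarker.getReport();
console.log(
  `${report.name}: ${report.ops.toFixed(2)} ops/sec ` +
    `(±${report.rme.toFixed(2)}%, n=${report.size})`
);
```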
Using Benchmark Suites:
```ts
import { Suite } from '@pawel-up/benchmark';

// Your functions to benchmark
function myFunction1() {
  // ... your code ...
}

function myFunction2() {
  // ... your code ...
}

async function main() {
  const suite = new Suite('My Benchmark Suite', { maxExecutionTime: 10000 });
  suite.setSetup(async () => {
    console.log('Running setup function...');
    // Do some setup work here...
    await new Promise(resolve => setTimeout(resolve, 1000)); // Example async setup
    console.log('Setup function completed.');
  });
  suite.setup();
  suite.add('Function 1', myFunction1);
  suite.setup();
  suite.add('Function 2', myFunction2);

  await suite.run();
  const report = suite.getReport();
  console.log(report);
}

main();
// Note: This example uses `console.log` for demonstration purposes. The core library does not include any built-in reporters.
```
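Reporters can hook into a suite via addReporter. The API reference below documents the Reporter class and the 'after-each' / 'after-all' timings; the sketch below assumes Reporter is exported and intended to be subclassed, which the reference implies but does not show:

```ts
import { Suite, Reporter } from '@pawel-up/benchmark';
import type { BenchmarkReport, SuiteReport } from '@pawel-up/benchmark';

// Assumption: `Reporter` is exported and meant to be subclassed.
class ConsoleReporter extends Reporter {
  async run(report: BenchmarkReport | SuiteReport): Promise<void> {
    if (report.kind === 'suite') {
      for (const result of report.results) {
        console.log(`${result.name}: ${result.ops.toFixed(2)} ops/sec`);
      }
    } else {
      console.log(`${report.name}: ${report.ops.toFixed(2)} ops/sec`);
    }
  }
}

const suite = new Suite('Reported Suite');
suite.addReporter(new ConsoleReporter(), 'after-all'); // run once, after all benchmarks
```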
Using compareFunction:
```ts
import { compareFunction, SuiteReport } from '@pawel-up/benchmark';
import * as fs from 'fs/promises';

async function main() {
  // Load suite reports from files (example)
  const suiteReport1 = JSON.parse(await fs.readFile('suite_report_1.json', 'utf-8')) as SuiteReport;
  const suiteReport2 = JSON.parse(await fs.readFile('suite_report_2.json', 'utf-8')) as SuiteReport;
  const suiteReport3 = JSON.parse(await fs.readFile('suite_report_3.json', 'utf-8')) as SuiteReport;
  const suiteReport4 = JSON.parse(await fs.readFile('suite_report_4.json', 'utf-8')) as SuiteReport;

  const suiteReports = [suiteReport1, suiteReport2, suiteReport3, suiteReport4];

  // Example 1: Compare with JSON output
  compareFunction('myFunction', suiteReports, { format: 'json' });

  // Example 2: Compare with CSV output
  compareFunction('myFunction', suiteReports, { format: 'csv' });

  // Example 3: Compare with default table output
  compareFunction('myFunction', suiteReports);
}

main().catch(console.error);
```
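The files read above have to come from somewhere. One way to produce them, using the documented fact that Suite.run() resolves with a SuiteReport, is to serialize the report after each run (the file name and formatting here are arbitrary):

```ts
import { Suite } from '@pawel-up/benchmark';
import * as fs from 'fs/promises';

// Sketch: persist a suite report so later runs can be compared against it.
async function saveReport() {
  const suite = new Suite('My Benchmark Suite');
  suite.add('myFunction', () => {
    // ... your code ...
  });
  const report = await suite.run(); // run() resolves with a SuiteReport
  await fs.writeFile('suite_report_1.json', JSON.stringify(report, null, 2));
}

saveReport().catch(console.error);
```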
@pawel-up/benchmark goes beyond simple timing measurements: it leverages statistical methods to provide a more accurate and meaningful assessment of function performance. By using a statistical approach, @pawel-up/benchmark helps you make data-driven decisions about your code's performance, leading to more effective optimizations and a better understanding of your library's behavior.
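As one example of what the comparison statistics buy you: Cohen's d (the `cohensd` metric above) expresses the difference between two runs in units of their pooled standard deviation, so you can tell a meaningful speedup from noise. A sketch using the standard formula and the documented `mean`, `variance`, and `size` report fields:

```ts
import type { BenchmarkReport } from '@pawel-up/benchmark';

// Cohen's d between two runs, using the standard pooled-stddev formula.
function cohensD(a: BenchmarkReport, b: BenchmarkReport): number {
  const pooledVariance =
    ((a.size - 1) * a.variance + (b.size - 1) * b.variance) /
    (a.size + b.size - 2);
  return (a.mean - b.mean) / Math.sqrt(pooledVariance);
}
```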
Benchmarker Class

`new Benchmarker(name: string, fn: () => unknown | Promise<unknown>, options?: BenchmarkOptions)` - Creates a new Benchmarker instance.
- `name`: The name of the benchmark.
- `fn`: The function to benchmark (can be synchronous or asynchronous).
- `options`: An optional BenchmarkOptions object to configure the benchmark.

`async run(): Promise<void>` - Runs the benchmark.

`getReport(): BenchmarkReport` - Returns a BenchmarkReport object with the benchmark results.
Suite Class

`new Suite(name: string, options?: BenchmarkOptions)` - Creates a new Suite instance.
- `name`: The name of the suite.
- `options`: An optional BenchmarkOptions object to configure the suite.

`add(name: string, fn: () => unknown | Promise<unknown>): this` - Adds a benchmark to the suite.
- `name`: The name of the benchmark.
- `fn`: The function to benchmark.

`addReporter(reporter: Reporter, timing: ReporterExecutionTiming): this` - Adds a reporter to the suite.
- `reporter`: The reporter instance.
- `timing`: When the reporter should be executed ('after-each' or 'after-all').

`setSetup(fn: () => unknown | Promise<unknown>): this` - Sets the setup function.
- `fn`: The setup function.

`setup(): this`

`async run(): Promise<SuiteReport>` - Runs the suite.

`getReport(): SuiteReport` - Returns a SuiteReport object with the suite results.
Reporter Class

`async run(report: BenchmarkReport | SuiteReport): Promise<void>`
BenchmarkOptions Interface

- `maxExecutionTime?: number`
- `warmupIterations?: number`
- `innerIterations?: number`
- `maxInnerIterations?: number`
- `timeThreshold?: number`
- `minsize?: number`
- `maxIterations?: number`
- `debug?: boolean`
- `logLevel?: number`
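The option semantics are not spelled out here, so the sketch below is merely a plausible configuration, with comments reflecting a reading of the option names rather than documented behavior (it also assumes the BenchmarkOptions interface is exported):

```ts
import type { BenchmarkOptions } from '@pawel-up/benchmark';

// Comments are assumptions inferred from the option names, not documented semantics.
const options: BenchmarkOptions = {
  maxExecutionTime: 5000, // presumably a cap on total sampling time, in ms
  warmupIterations: 10,   // presumably un-timed runs before measurement starts
  innerIterations: 1,     // presumably timed calls per collected sample
  maxIterations: 100,     // presumably an upper bound on collected samples
  debug: false,
};
```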
BenchmarkReport Interface

- `kind: 'benchmark'`
- `name: string`
- `ops: number` - Operations per second
- `rme: number` - Relative Margin of Error (RME)
- `stddev: number` - Sample standard deviation
- `mean: number` - Sample arithmetic mean
- `me: number` - Margin of error
- `sample: number[]` - The sample of execution times
- `sem: number` - The standard error of the mean
- `variance: number` - The sample variance
- `size: number` - Sample size
- `date: string`
SuiteReport Interface

- `kind: 'suite'`
- `name: string`
- `date: string`
- `results: BenchmarkReport[]`
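Because a SuiteReport is plain data, rendering it is straightforward. A sketch that tabulates the documented fields:

```ts
import type { SuiteReport } from '@pawel-up/benchmark';

// Sketch: tabulate a suite's results using only documented fields.
function printSuite(report: SuiteReport): void {
  console.log(`${report.name} (${report.date})`);
  console.table(
    report.results.map((r) => ({
      name: r.name,
      'ops/sec': r.ops.toFixed(2),
      'rme %': r.rme.toFixed(2),
      samples: r.size,
    }))
  );
}
```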
Contributions are welcome! Please see the contributing guidelines for more information.
This project is licensed under the MIT License.
No security vulnerabilities found.
Download trends:
- Last Day: 26 (116.7% compared to previous day)
- Last Week: 164 (-11.4% compared to previous week)
- Last Month: 546 (254.5% compared to previous month)
- Last Year: 700 (0% compared to previous year)