```
npm install prom-client
```
A Prometheus client for Node.js that supports histograms, summaries, gauges and counters.
See the example folder for sample usage. The library does not bundle any web framework. To expose the metrics, respond to Prometheus's scrape requests with the result of `await registry.metrics()`.
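For instance, a minimal scrape endpoint can be built with Node's built-in `http` module. This is a sketch, not part of the library: the `/metrics` path, port 3000, and the `collectDefaultMetrics()` call are illustrative choices.

```js
// Sketch: expose the global registry over HTTP using only Node's http module.
const http = require('http');
const client = require('prom-client');

client.collectDefaultMetrics();

http
  .createServer(async (req, res) => {
    if (req.url === '/metrics') {
      res.setHeader('Content-Type', client.register.contentType);
      res.end(await client.register.metrics());
    } else {
      res.statusCode = 404;
      res.end();
    }
  })
  .listen(3000);
```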
Node.js's `cluster` module spawns multiple processes and hands off socket connections to those workers. Returning metrics from a worker's local registry will only reveal that individual worker's metrics, which is generally undesirable. To solve this, you can aggregate all of the workers' metrics in the master process. See example/cluster.js for an example.
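A minimal sketch of that aggregation in the master process, assuming an Express-style `req`/`res` handler (the full version shipped with the library lives in example/cluster.js):

```js
// In the master/primary process: serve metrics aggregated from all workers.
const client = require('prom-client');
const aggregatorRegistry = new client.AggregatorRegistry();

async function clusterMetricsHandler(req, res) {
  try {
    const metrics = await aggregatorRegistry.clusterMetrics();
    res.set('Content-Type', aggregatorRegistry.contentType);
    res.send(metrics);
  } catch (err) {
    res.status(500).send(err.message);
  }
}
```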
Default metrics use sensible aggregation methods. (Note, however, that the event
loop lag mean and percentiles are averaged, which is not perfectly accurate.)
Custom metrics are summed across workers by default. To use a different
aggregation method, set the aggregator
property in the metric config to one of
'sum', 'first', 'min', 'max', 'average' or 'omit'. (See lib/metrics/version.js
for an example.)
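For example, a metric whose value is identical in every worker could use `'first'` instead of the default `'sum'`. A sketch with a made-up metric name:

```js
// Sketch: a gauge that should not be summed across workers.
const client = require('prom-client');

new client.Gauge({
  name: 'app_version_info', // hypothetical metric name
  help: 'Application version exposed as a label',
  labelNames: ['version'],
  aggregator: 'first', // take the value from one worker instead of summing
});
```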
If you need to expose metrics about an individual worker, you can include a value that is unique to the worker (such as the worker ID or process ID) in a label. (See example/server.js for an example using `worker_${cluster.worker.id}` as a label value.)
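A sketch of that approach, with a made-up counter that labels each observation with the worker that produced it:

```js
// Runs inside a worker process (cluster.worker is undefined in the primary).
const cluster = require('cluster');
const client = require('prom-client');

const handledRequests = new client.Counter({
  name: 'worker_handled_requests_total', // hypothetical metric name
  help: 'Requests handled, labelled per worker',
  labelNames: ['worker'],
});

handledRequests.inc({ worker: `worker_${cluster.worker.id}` });
```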
Metrics are aggregated from the global registry by default. To use a different registry, call `client.AggregatorRegistry.setRegistries(registryOrArrayOfRegistries)` from the worker processes.
There are some default metrics recommended by Prometheus itself. To collect these, call `collectDefaultMetrics`. In addition, some Node.js-specific metrics are included, such as event loop lag, active handles, GC and Node.js version. See lib/metrics for a list of all metrics.

NOTE: Some of the metrics, concerning file descriptors and memory, are only available on Linux.
`collectDefaultMetrics` optionally accepts a config object with the following entries:

- `prefix`: an optional prefix for metric names. Default: no prefix.
- `register`: to which registry the metrics should be registered. Default: the global default registry.
- `gcDurationBuckets`: custom buckets for the GC duration histogram. Default buckets are [0.001, 0.01, 0.1, 1, 2, 5] (in seconds).
- `eventLoopMonitoringPrecision`: sampling rate in milliseconds. Must be greater than zero. Default: 10.

To register metrics to another registry, pass it in as `register`:
```js
const client = require('prom-client');
const collectDefaultMetrics = client.collectDefaultMetrics;
const Registry = client.Registry;
const register = new Registry();
collectDefaultMetrics({ register });
```
To use custom buckets for the GC duration histogram, pass them in as `gcDurationBuckets`:
```js
const client = require('prom-client');
const collectDefaultMetrics = client.collectDefaultMetrics;
collectDefaultMetrics({ gcDurationBuckets: [0.1, 0.2, 0.3] });
```
To prefix metric names with your own arbitrary string, pass in a `prefix`:
```js
const client = require('prom-client');
const collectDefaultMetrics = client.collectDefaultMetrics;
const prefix = 'my_application_';
collectDefaultMetrics({ prefix });
```
To apply generic labels to all default metrics, pass an object to the labels
property (useful if you're working in a clustered environment):
```js
const client = require('prom-client');
const collectDefaultMetrics = client.collectDefaultMetrics;
collectDefaultMetrics({
  labels: { NODE_APP_INSTANCE: process.env.NODE_APP_INSTANCE },
});
```
You can get the full list of default metrics by inspecting `client.collectDefaultMetrics.metricsList`.
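For example (names only; the exact contents depend on the platform and library version):

```js
const client = require('prom-client');
console.log(client.collectDefaultMetrics.metricsList);
```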
Default metrics are collected on scrape of the metrics endpoint, not on an interval.
```js
const client = require('prom-client');

const collectDefaultMetrics = client.collectDefaultMetrics;

collectDefaultMetrics();
```
All metric types have two mandatory parameters: `name` and `help`. Refer to https://prometheus.io/docs/practices/naming/ for guidance on naming metrics.
For metrics based on point-in-time observations (e.g. current memory usage, as opposed to HTTP request durations observed continuously in a histogram), you should provide a `collect()` function, which will be invoked when Prometheus scrapes your metrics endpoint. `collect()` can either be synchronous or return a promise. See Gauge below for an example. (Note that you should not update metric values in a `setInterval` callback; do so in this `collect` function instead.)
See Labels for information on how to configure labels for all metric types.
Counters go up, and reset when the process restarts.
```js
const client = require('prom-client');
const counter = new client.Counter({
  name: 'metric_name',
  help: 'metric_help',
});
counter.inc(); // Increment by 1
counter.inc(10); // Increment by 10
```
Gauges are similar to Counters but a Gauge's value can be decreased.
```js
const client = require('prom-client');
const gauge = new client.Gauge({ name: 'metric_name', help: 'metric_help' });
gauge.set(10); // Set to 10
gauge.inc(); // Increment by 1
gauge.inc(10); // Increment by 10
gauge.dec(); // Decrement by 1
gauge.dec(10); // Decrement by 10
```
If the gauge is used for a point-in-time observation, you should provide a `collect` function:
```js
const client = require('prom-client');
new client.Gauge({
  name: 'metric_name',
  help: 'metric_help',
  collect() {
    // Invoked when the registry collects its metrics' values.
    // This can be synchronous or it can return a promise/be an async function.
    this.set(/* the current value */);
  },
});
```
```js
// Async version:
const client = require('prom-client');
new client.Gauge({
  name: 'metric_name',
  help: 'metric_help',
  async collect() {
    // Invoked when the registry collects its metrics' values.
    const currentValue = await somethingAsync();
    this.set(currentValue);
  },
});
```
Note that you should not use arrow functions for `collect` because arrow functions will not have the correct value for `this`.
```js
// Set value to current time in seconds:
gauge.setToCurrentTime();

// Record durations:
const end = gauge.startTimer();
http.get('url', res => {
  end();
});
```
Histograms track sizes and frequency of events.
The default buckets are intended to cover usual web/RPC requests, but they can be overridden. (See also Bucket Generators.)
```js
const client = require('prom-client');
new client.Histogram({
  name: 'metric_name',
  help: 'metric_help',
  buckets: [0.1, 5, 15, 50, 100, 500],
});
```
```js
const client = require('prom-client');
const histogram = new client.Histogram({
  name: 'metric_name',
  help: 'metric_help',
});
histogram.observe(10); // Observe value in histogram
```
```js
const end = histogram.startTimer();
xhrRequest(function (err, res) {
  const seconds = end(); // Observes the xhrRequest duration in seconds and returns it
});
```
Summaries calculate percentiles of observed values.
The default percentiles are: 0.01, 0.05, 0.5, 0.9, 0.95, 0.99, 0.999. But they
can be overridden by specifying a percentiles
array. (See also
Bucket Generators.)
```js
const client = require('prom-client');
new client.Summary({
  name: 'metric_name',
  help: 'metric_help',
  percentiles: [0.01, 0.1, 0.9, 0.99],
});
```
To enable the sliding window functionality for summaries, add `maxAgeSeconds` and `ageBuckets` to the config like this:
```js
const client = require('prom-client');
new client.Summary({
  name: 'metric_name',
  help: 'metric_help',
  maxAgeSeconds: 600,
  ageBuckets: 5,
  pruneAgedBuckets: false,
});
```
`maxAgeSeconds` controls how old a bucket can be before it is reset, and `ageBuckets` configures how many buckets the summary's sliding window has. If `pruneAgedBuckets` is `false` (the default), the metric value will always be present, even when empty (its percentile values will be 0). Set `pruneAgedBuckets` to `true` if you don't want to export the metric when it is empty.
```js
const client = require('prom-client');
const summary = new client.Summary({
  name: 'metric_name',
  help: 'metric_help',
});
summary.observe(10);
```
```js
const end = summary.startTimer();
xhrRequest(function (err, res) {
  end(); // Observes the xhrRequest duration in seconds
});
```
All metrics can take a `labelNames` property in the configuration object. All label names that the metric supports need to be declared here. There are several ways to add values to the labels:
```js
const client = require('prom-client');
const gauge = new client.Gauge({
  name: 'metric_name',
  help: 'metric_help',
  labelNames: ['method', 'statusCode'],
});

// 1st version: Set value to 100 with "method" set to "GET" and "statusCode" to "200"
gauge.set({ method: 'GET', statusCode: '200' }, 100);
// 2nd version: Same effect as above
gauge.labels({ method: 'GET', statusCode: '200' }).set(100);
// 3rd version: And again the same effect as above
gauge.labels('GET', '200').set(100);
```
It is also possible to use timers with labels, both before and after the timer is created:
```js
const end = startTimer({ method: 'GET' }); // Set method to GET, we don't know statusCode yet
xhrRequest(function (err, res) {
  if (err) {
    end({ statusCode: '500' }); // Sets value to xhrRequest duration in seconds with statusCode 500
  } else {
    end({ statusCode: '200' }); // Sets value to xhrRequest duration in seconds with statusCode 200
  }
});
```
Metrics with labels cannot be exported before they have been observed at least once, since the possible label values are not known before they're observed.
For histograms, this can be solved by explicitly zeroing all expected label values:
```js
const client = require('prom-client');
const histogram = new client.Histogram({
  name: 'metric_name',
  help: 'metric_help',
  buckets: [0.1, 5, 15, 50, 100, 500],
  labelNames: ['method'],
});
histogram.zero({ method: 'GET' });
histogram.zero({ method: 'POST' });
```
TypeScript can also enforce label names using `as const`:
```ts
import * as client from 'prom-client';

const counter = new client.Counter({
  name: 'metric_name',
  help: 'metric_help',
  // add `as const` here to enforce label names
  labelNames: ['method'] as const,
});

// Ok
counter.inc({ method: 1 });

// this is an error since `'methods'` is not a valid `labelName`
// @ts-expect-error
counter.inc({ methods: 1 });
```
Static labels may be applied to every metric emitted by a registry:
```js
const client = require('prom-client');
const defaultLabels = { serviceName: 'api-v1' };
client.register.setDefaultLabels(defaultLabels);
```
This will output metrics in the following way:
```
# HELP process_resident_memory_bytes Resident memory size in bytes.
# TYPE process_resident_memory_bytes gauge
process_resident_memory_bytes{serviceName="api-v1"} 33853440 1498510040309
```
Default labels will be overridden if there is a name conflict. `register.clear()` will clear the default labels.
The exemplars defined in the OpenMetrics specification can be enabled on Counter and Histogram metric types. The default metrics have support for OpenTelemetry; they will populate the exemplars with the labels `{traceId, spanId}` and their corresponding values.
The format of `inc()` and `observe()` calls is different if exemplars are enabled: they take a single object of the form `{labels, value, exemplarLabels}`.
When using exemplars, the registry used for metrics should be set to OpenMetrics type (including the global or default registry if no registries are specified).
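A sketch of the above, assuming the `enableExemplars` metric option and the object-style `inc()` call; the metric name and trace/span IDs are made up:

```js
const client = require('prom-client');

// Exemplars are only emitted in the OpenMetrics format, so switch the
// global registry's content type first.
client.register.setContentType(client.Registry.OPENMETRICS_CONTENT_TYPE);

const requestCounter = new client.Counter({
  name: 'http_requests_total', // hypothetical metric name
  help: 'Total HTTP requests',
  labelNames: ['method'],
  enableExemplars: true, // assumed option name for enabling exemplars
});

// With exemplars enabled, inc() takes a single object:
requestCounter.inc({
  labels: { method: 'GET' },
  value: 1,
  exemplarLabels: { traceId: 'abc123', spanId: 'def456' },
});
```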
The library supports both the old Prometheus format and the OpenMetrics format. The format can be set per registry. For default metrics:
```js
const Prometheus = require('prom-client');
Prometheus.register.setContentType(
  Prometheus.Registry.OPENMETRICS_CONTENT_TYPE,
);
```
Currently available registry types are defined by the content types:

- `PROMETHEUS_CONTENT_TYPE`: version 0.0.4 of the original Prometheus metrics format; this is currently the default registry type.
- `OPENMETRICS_CONTENT_TYPE`: defaults to version 1.0.0 of the OpenMetrics standard.
The HTTP Content-Type string for each registry type is exposed both at module level (`prometheusContentType` and `openMetricsContentType`) and as static properties on the `Registry` object. The `contentType` constant exposed by the module returns the default content type used when creating a new registry; it currently defaults to the Prometheus type.
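To illustrate where these constants live (a sketch; the values are opaque Content-Type strings):

```js
const client = require('prom-client');

// Module-level constants:
console.log(client.prometheusContentType);
console.log(client.openMetricsContentType);

// Static properties on Registry:
console.log(client.Registry.PROMETHEUS_CONTENT_TYPE);
console.log(client.Registry.OPENMETRICS_CONTENT_TYPE);

// Default content type used for new registries:
console.log(client.contentType);
```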
By default, metrics are automatically registered to the global registry (located at `require('prom-client').register`). You can prevent this by specifying `registers: []` in the metric constructor configuration.
Using non-global registries requires creating a Registry instance and passing it inside `registers` in the metric configuration object. Alternatively, you can pass an empty `registers` array and register the metric manually.
Registry has a `merge` function that enables you to expose multiple registries on the same endpoint. If the same metric name exists in both registries, an error will be thrown.
Merging registries of different types is undefined; make sure all merged registries have the same type (Prometheus or OpenMetrics).
```js
const client = require('prom-client');
const registry = new client.Registry();
const counter = new client.Counter({
  name: 'metric_name',
  help: 'metric_help',
  registers: [registry], // specify a non-default registry
});
const histogram = new client.Histogram({
  name: 'metric_name',
  help: 'metric_help',
  registers: [], // don't automatically register this metric
});
registry.registerMetric(histogram); // register metric manually
counter.inc();

const mergedRegistries = client.Registry.merge([registry, client.register]);
```
If you want to use multiple or non-default registries with the Node.js cluster
module, you will need to set the registry/registries to aggregate from:
```js
const AggregatorRegistry = client.AggregatorRegistry;
AggregatorRegistry.setRegistries(registry);
// or for multiple registries:
AggregatorRegistry.setRegistries([registry1, registry2]);
```
You can get all metrics by running `await register.metrics()`, which will return a string in the Prometheus exposition format.

If you need to output a single metric in the Prometheus exposition format, you can use `await register.getSingleMetricAsString(*name of metric*)`, which will return a string for Prometheus to consume.

If you need to get a reference to a previously registered metric, you can use `register.getSingleMetric(*name of metric*)`.

You can remove all metrics by calling `register.clear()`. You can also remove a single metric by calling `register.removeSingleMetric(*name of metric*)`.

If you need to reset all metrics, you can use `register.resetMetrics()`. The metrics will remain present in the register and can be used without the need to instantiate them again, like you would need to do after `register.clear()`.
You can get aggregated metrics for all workers in a Node.js cluster with `await register.clusterMetrics()`. This method returns a promise that resolves with a metrics string suitable for Prometheus to consume.
```js
const metrics = await register.clusterMetrics();

// - or -

register
  .clusterMetrics()
  .then(metrics => {
    /* ... */
  })
  .catch(err => {
    /* ... */
  });
```
It is possible to push metrics via a Pushgateway.
```js
const http = require('http');
const client = require('prom-client');
let gateway = new client.Pushgateway('http://127.0.0.1:9091');

// Add metrics and overwrite old ones
gateway
  .pushAdd({ jobName: 'test' })
  .then(({ resp, body }) => {
    /* ... */
  })
  .catch(err => {
    /* ... */
  });

// Overwrite all metrics (uses PUT)
gateway
  .push({ jobName: 'test' })
  .then(({ resp, body }) => {
    /* ... */
  })
  .catch(err => {
    /* ... */
  });

// Delete all metrics for jobName
gateway
  .delete({ jobName: 'test' })
  .then(({ resp, body }) => {
    /* ... */
  })
  .catch(err => {
    /* ... */
  });

// All gateway requests can have groupings on them
gateway
  .pushAdd({ jobName: 'test', groupings: { key: 'value' } })
  .then(({ resp, body }) => {
    /* ... */
  })
  .catch(err => {
    /* ... */
  });

// It's possible to extend the Pushgateway with request options from Node's
// core http/https library. In particular, you might want to provide an agent
// so that TCP connections are reused.
gateway = new client.Pushgateway('http://127.0.0.1:9091', {
  timeout: 5000, // Set the request timeout to 5000ms
  agent: new http.Agent({
    keepAlive: true,
    keepAliveMsecs: 10000,
    maxSockets: 5,
  }),
});
```
Some gateways such as Gravel Gateway do not support grouping by job name, exposing a plain `/metrics` endpoint instead of `/metrics/job/<jobName>`. It's possible to configure a gateway instance to not require a `jobName` in the options argument.
```js
const gravelGateway = new client.Pushgateway('http://127.0.0.1:9091', {
  timeout: 5000,
  requireJobName: false,
});
gravelGateway.pushAdd();
```
For convenience, there are two bucket generator functions - linear and exponential.
```js
const client = require('prom-client');
new client.Histogram({
  name: 'metric_name',
  help: 'metric_help',
  buckets: client.linearBuckets(0, 10, 20), // Create 20 buckets, starting at 0 with a width of 10
});

new client.Histogram({
  name: 'metric_name',
  help: 'metric_help',
  buckets: client.exponentialBuckets(1, 2, 5), // Create 5 buckets, starting at 1 with a factor of 2
});
```
To avoid native dependencies in this module, GC statistics for bytes reclaimed in each GC sweep are kept in a separate module: https://github.com/SimenB/node-prometheus-gc-stats. (Note that that metric may no longer be accurate now that v8 uses parallel garbage collection.)