Related packages:

- `@vercel/gatsby-plugin-vercel-analytics`: Track Core Web Vitals in Gatsby projects with Vercel Speed Insights.
- `cypress-web-vitals`: A Web Vitals command for Cypress.
- `@nuxtjs/web-vitals`: Web Vitals for Nuxt.js.
- `@reshepe-web-vitals/nuxt`: Reshepe web vitals - monitor Core Web Vitals on your website.
npm install web-vitals
| Category | Score |
|---|---|
| Supply Chain | 99.5 |
| Quality | 99.6 |
| Maintenance | 91.2 |
| Vulnerability | 100 |
| License | 100 |
7,630 stars · 441 commits · 418 forks · 117 watching · 12 branches · 70 contributors · Updated on 28 Nov 2024
Languages: JavaScript (50.75%), TypeScript (41.87%), Nunjucks (7.28%), Shell (0.09%), CSS (0.01%)
Total downloads:

| Period | Downloads | Change vs previous period |
|---|---|---|
| Last day | 774,892 | -6.9% |
| Last week | 4,339,475 | +0.7% |
| Last month | 18,673,671 | +5.4% |
| Last year | 187,220,145 | -26.8% |
web-vitals
The `web-vitals` library is a tiny (~2K, brotli'd), modular library for measuring all the Web Vitals metrics on real users, in a way that accurately matches how they're measured by Chrome and reported to other Google tools (e.g. Chrome User Experience Report, PageSpeed Insights, Search Console's Speed Report).
The library supports all of the Core Web Vitals as well as a number of other metrics that are useful in diagnosing real-user performance issues.
[!CAUTION] FID is deprecated and will be removed in the next major release.
The `web-vitals` library uses the `buffered` flag for `PerformanceObserver`, allowing it to access performance entries that occurred before the library was loaded. This means you do not need to load this library early in order to get accurate performance data. In general, this library should be deferred until after other user-impacting code has loaded.
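For example, here is a minimal sketch of one way to defer the library until the window `load` event (this assumes a bundler that supports dynamic `import()` of the bare 'web-vitals' specifier; the exact deferral trigger is an illustrative choice):

```js
// Because web-vitals uses buffered PerformanceObservers, metrics that
// occur before this code runs are still captured.
addEventListener('load', () => {
  import('web-vitals').then(({onCLS, onINP, onLCP}) => {
    onCLS(console.log);
    onINP(console.log);
    onLCP(console.log);
  });
});
```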
You can install this library from npm by running:
```sh
npm install web-vitals
```
[!NOTE] If you're not using npm, you can still load `web-vitals` via `<script>` tags from a CDN like unpkg.com. See the load `web-vitals` from a CDN usage example below for details.
There are a few different builds of the `web-vitals` library, and how you load the library depends on which build you want to use. For details on the difference between the builds, see which build is right for you.
1. The "standard" build
To load the "standard" build, import modules from the `web-vitals` package in your application code (as you would with any npm package and node-based build tool):
```js
import {onLCP, onINP, onCLS} from 'web-vitals';

onCLS(console.log);
onINP(console.log);
onLCP(console.log);
```
[!NOTE] In version 2, these functions were named `getXXX()` rather than `onXXX()`. They were renamed in version 3 to reduce confusion (see #217 for details) and will continue to be available under the `getXXX()` names until at least version 4. Users are encouraged to switch to the new names, though, for future compatibility.
2. The "attribution" build
Measuring the Web Vitals scores for your real users is a great first step toward optimizing the user experience. But if your scores aren't good, the next step is to understand why they're not good and work to improve them.
The "attribution" build helps you do that by including additional diagnostic information with each metric to help you identify the root cause of poor performance as well as prioritize the most important things to fix.
The "attribution" build is slightly larger than the "standard" build (by about 600 bytes, brotli'd), so while the code size is still small, it's only recommended if you're actually using these features.
To load the "attribution" build, change any `import` statements that reference `web-vitals` to `web-vitals/attribution`:
```diff
- import {onLCP, onINP, onCLS} from 'web-vitals';
+ import {onLCP, onINP, onCLS} from 'web-vitals/attribution';
```
Usage for each of the imported functions is identical to the standard build, but when importing from the attribution build, the metric objects will contain an additional `attribution` property. See Send attribution data for usage examples, and the `attribution` reference for details on what values are added for each metric.
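For instance, a minimal sketch that logs one attribution field (LCP's `element` selector, documented in the attribution reference below):

```js
import {onLCP} from 'web-vitals/attribution';

onLCP(({value, attribution}) => {
  // `attribution.element` is a selector identifying the LCP element.
  console.log('LCP:', value, 'element:', attribution.element);
});
```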
The recommended way to use the `web-vitals` package is to install it from npm and integrate it into your build process. However, if you're not using npm, it's still possible to use `web-vitals` by requesting it from a CDN that serves npm package files. The following examples show how to load `web-vitals` from unpkg.com, but it is also possible to load it from jsDelivr or cdnjs.
Important! The unpkg.com, jsDelivr, and cdnjs CDNs are shown here for example purposes only. unpkg.com, jsDelivr, and cdnjs are not affiliated with Google, and there are no guarantees that loading the library from those CDNs will continue to work in the future. Self-hosting the built files rather than loading them from a CDN is better for security, reliability, and performance reasons.
Load the "standard" build (using a module script)
```html
<!-- Append the `?module` param to load the module version of `web-vitals` -->
<script type="module">
  import {onCLS, onINP, onLCP} from 'https://unpkg.com/web-vitals@4?module';

  onCLS(console.log);
  onINP(console.log);
  onLCP(console.log);
</script>
```
Load the "standard" build (using a classic script)
```html
<script>
  (function () {
    var script = document.createElement('script');
    script.src = 'https://unpkg.com/web-vitals@4/dist/web-vitals.iife.js';
    script.onload = function () {
      // When loading `web-vitals` using a classic script, all the public
      // methods can be found on the `webVitals` global namespace.
      webVitals.onCLS(console.log);
      webVitals.onINP(console.log);
      webVitals.onLCP(console.log);
    };
    document.head.appendChild(script);
  })();
</script>
```
Load the "attribution" build (using a module script)
```html
<!-- Append the `?module` param to load the module version of `web-vitals` -->
<script type="module">
  import {
    onCLS,
    onINP,
    onLCP,
  } from 'https://unpkg.com/web-vitals@4/dist/web-vitals.attribution.js?module';

  onCLS(console.log);
  onINP(console.log);
  onLCP(console.log);
</script>
```
Load the "attribution" build (using a classic script)
```html
<script>
  (function () {
    var script = document.createElement('script');
    script.src =
      'https://unpkg.com/web-vitals@4/dist/web-vitals.attribution.iife.js';
    script.onload = function () {
      // When loading `web-vitals` using a classic script, all the public
      // methods can be found on the `webVitals` global namespace.
      webVitals.onCLS(console.log);
      webVitals.onINP(console.log);
      webVitals.onLCP(console.log);
    };
    document.head.appendChild(script);
  })();
</script>
```
Each of the Web Vitals metrics is exposed as a single function that takes a `callback` function, which will be called any time the metric value is available and ready to be reported.
The following example measures each of the Core Web Vitals metrics and logs the result to the console once its value is ready to report.
(The examples below import the "standard" build, but they will work with the "attribution" build as well.)
```js
import {onCLS, onINP, onLCP} from 'web-vitals';

onCLS(console.log);
onINP(console.log);
onLCP(console.log);
```
Note that some of these metrics will not report until the user has interacted with the page, switched tabs, or the page starts to unload. If you don't see the values logged to the console immediately, try reloading the page (with preserve log enabled) or switching tabs and then switching back.
Also, in some cases a metric callback may never be called, for example if the page was loaded in a background tab and never brought to the foreground, or if the user never interacts with the page (for interaction-based metrics like INP).

In other cases, a metric callback may be called more than once:

- CLS and INP are reported whenever the page's `visibilityState` changes to hidden.
- All metrics are reported again (with new `id` values) after the page is restored from the back/forward cache.

[!WARNING] Do not call any of the Web Vitals functions (e.g. `onCLS()`, `onINP()`, `onLCP()`) more than once per page load. Each of these functions creates a `PerformanceObserver` instance and registers event listeners for the lifetime of the page. While the overhead of calling these functions once is negligible, calling them repeatedly on the same page may eventually result in a memory leak.
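If your setup might initialize reporting more than once, a module-level guard is one way to avoid duplicate registration. A minimal sketch (the `initWebVitals()` helper and `registered` flag are hypothetical, not part of the library):

```js
import {onCLS, onINP, onLCP} from 'web-vitals';

// Hypothetical guard: register the web-vitals listeners at most once,
// even if this init function is called multiple times (e.g. in an SPA).
let registered = false;

export function initWebVitals(report) {
  if (registered) return;
  registered = true;
  onCLS(report);
  onINP(report);
  onLCP(report);
}
```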
In most cases, you only want the `callback` function to be called when the metric is ready to be reported. However, it is possible to report every change (e.g. each larger layout shift as it happens) by setting `reportAllChanges` to `true` in the optional configuration object (second parameter).
[!IMPORTANT] `reportAllChanges` only reports when the metric changes, not for each input to the metric. For example, a new layout shift that does not increase the CLS metric will not be reported even with `reportAllChanges` set to `true`, because the CLS metric has not changed. Similarly, for INP, each interaction is not reported even with `reportAllChanges` set to `true`; an interaction is only reported when it causes an increase to INP.
This can be useful when debugging, but in general using `reportAllChanges` is not needed (or recommended) for measuring these metrics in production.
```js
import {onCLS} from 'web-vitals';

// Logs CLS as the value changes.
onCLS(console.log, {reportAllChanges: true});
```
Some analytics providers allow you to update the value of a metric, even after you've already sent it to their servers (overwriting the previously-sent value with the same `id`).
Other analytics providers, however, do not allow this, so instead of reporting the new value, you need to report only the delta (the difference between the current value and the last-reported value). You can then compute the total value by summing all metric deltas sent with the same ID.
The following example shows how to use the `id` and `delta` properties:
```js
import {onCLS, onINP, onLCP} from 'web-vitals';

function logDelta({name, id, delta}) {
  console.log(`${name} matching ID ${id} changed by ${delta}`);
}

onCLS(logDelta);
onINP(logDelta);
onLCP(logDelta);
```
[!NOTE] The first time the `callback` function is called, its `value` and `delta` properties will be the same.
In addition to using the `id` field to group multiple deltas for the same metric, it can also be used to differentiate different metrics reported on the same page. For example, after a back/forward cache restore, a new metric object is created with a new `id` (since back/forward cache restores are considered separate page visits).
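As a sketch of the delta-summing logic described above (the in-memory `totals` map is a hypothetical stand-in for aggregation in your analytics backend):

```js
// Hypothetical aggregator: reconstruct each metric's total value by
// summing every delta reported with the same metric `id`. Distinct `id`s
// also separate visits (e.g. back/forward cache restores) on the same page.
const totals = new Map();

function recordDelta({name, id, delta}) {
  const entry = totals.get(id) ?? {name, value: 0};
  entry.value += delta;
  totals.set(id, entry);
}
```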
The following example measures each of the Core Web Vitals metrics and reports them to a hypothetical `/analytics` endpoint, as soon as each is ready to be sent.
The `sendToAnalytics()` function uses the `navigator.sendBeacon()` method (if available), but falls back to the `fetch()` API when not.
```js
import {onCLS, onINP, onLCP} from 'web-vitals';

function sendToAnalytics(metric) {
  // Replace with whatever serialization method you prefer.
  // Note: JSON.stringify will likely include more data than you need.
  const body = JSON.stringify(metric);

  // Use `navigator.sendBeacon()` if available, falling back to `fetch()`.
  (navigator.sendBeacon && navigator.sendBeacon('/analytics', body)) ||
    fetch('/analytics', {body, method: 'POST', keepalive: true});
}

onCLS(sendToAnalytics);
onINP(sendToAnalytics);
onLCP(sendToAnalytics);
```
Google Analytics does not support reporting metric distributions in any of its built-in reports. However, if you set a unique event parameter value on every metric instance you send to Google Analytics (the `metric_id` param, as shown in the example below), you can build such a report yourself by first extracting the data via the Google Analytics Data API or the BigQuery export, and then visualizing it with any charting library you choose.
Google Analytics 4 introduces a new Event model allowing custom parameters instead of a fixed category, action, and label. It also supports non-integer values, making it easier to measure Web Vitals metrics compared to previous versions.
```js
import {onCLS, onINP, onLCP} from 'web-vitals';

function sendToGoogleAnalytics({name, delta, value, id}) {
  // Assumes the global `gtag()` function exists, see:
  // https://developers.google.com/analytics/devguides/collection/ga4
  gtag('event', name, {
    // Built-in params:
    value: delta, // Use `delta` so the value can be summed.
    // Custom params:
    metric_id: id, // Needed to aggregate events.
    metric_value: value, // Optional.
    metric_delta: delta, // Optional.

    // OPTIONAL: any additional params or debug info here.
    // See: https://web.dev/articles/debug-performance-in-the-field
    // metric_rating: 'good' | 'needs-improvement' | 'poor',
    // debug_info: '...',
    // ...
  });
}

onCLS(sendToGoogleAnalytics);
onINP(sendToGoogleAnalytics);
onLCP(sendToGoogleAnalytics);
```
For details on how to query this data in BigQuery, or visualise it in Looker Studio, see Measure and debug performance with Google Analytics 4 and BigQuery.
While `web-vitals` can be called directly from Google Tag Manager, using a pre-defined custom template makes this considerably easier. Some recommended templates include:
When using the attribution build, you can send additional data to help you debug why the metric values are the way they are.
This example sends an additional `debug_target` param to Google Analytics, corresponding to the element most associated with each metric.
```js
import {onCLS, onINP, onLCP} from 'web-vitals/attribution';

function sendToGoogleAnalytics({name, delta, value, id, attribution}) {
  const eventParams = {
    // Built-in params:
    value: delta, // Use `delta` so the value can be summed.
    // Custom params:
    metric_id: id, // Needed to aggregate events.
    metric_value: value, // Optional.
    metric_delta: delta, // Optional.
  };

  switch (name) {
    case 'CLS':
      eventParams.debug_target = attribution.largestShiftTarget;
      break;
    case 'INP':
      eventParams.debug_target = attribution.interactionTarget;
      break;
    case 'LCP':
      eventParams.debug_target = attribution.element;
      break;
  }

  // Assumes the global `gtag()` function exists, see:
  // https://developers.google.com/analytics/devguides/collection/ga4
  gtag('event', name, eventParams);
}

onCLS(sendToGoogleAnalytics);
onINP(sendToGoogleAnalytics);
onLCP(sendToGoogleAnalytics);
```
[!NOTE] This example relies on custom event parameters in Google Analytics 4.
See Debug performance in the field for more information and examples.
Rather than reporting each individual Web Vitals metric separately, you can minimize your network usage by batching multiple metric reports together in a single network request.
However, since not all Web Vitals metrics become available at the same time, and since not all metrics are reported on every page, you cannot simply defer reporting until all metrics are available.
Instead, you should keep a queue of all metrics that were reported and flush the queue whenever the page is backgrounded or unloaded:
```js
import {onCLS, onINP, onLCP} from 'web-vitals';

const queue = new Set();
function addToQueue(metric) {
  queue.add(metric);
}

function flushQueue() {
  if (queue.size > 0) {
    // Replace with whatever serialization method you prefer.
    // Note: JSON.stringify will likely include more data than you need.
    const body = JSON.stringify([...queue]);

    // Use `navigator.sendBeacon()` if available, falling back to `fetch()`.
    (navigator.sendBeacon && navigator.sendBeacon('/analytics', body)) ||
      fetch('/analytics', {body, method: 'POST', keepalive: true});

    queue.clear();
  }
}

onCLS(addToQueue);
onINP(addToQueue);
onLCP(addToQueue);

// Report all available metrics whenever the page is backgrounded or unloaded.
addEventListener('visibilitychange', () => {
  if (document.visibilityState === 'hidden') {
    flushQueue();
  }
});
```
[!NOTE] See the Page Lifecycle guide for an explanation of why `visibilitychange` is recommended over events like `beforeunload` and `unload`.
The `web-vitals` package includes both "standard" and "attribution" builds, as well as different formats of each, to allow developers to choose the format that best meets their needs or integrates with their architecture.
The following table lists all the builds distributed with the `web-vitals` package on npm.
| Filename (all within `dist/*`) | Export | Description |
|---|---|---|
| `web-vitals.js` | `pkg.module` | An ES module bundle of all metric functions, without any attribution features. This is the "standard" build and is the simplest way to consume this library out of the box. |
| `web-vitals.umd.cjs` | `pkg.main` | A UMD version of the `web-vitals.js` bundle (exposed on the `self.webVitals.*` namespace). |
| `web-vitals.iife.js` | -- | An IIFE version of the `web-vitals.js` bundle (exposed on the `self.webVitals.*` namespace). |
| `web-vitals.attribution.js` | -- | An ES module version of all metric functions that includes attribution features. |
| `web-vitals.attribution.umd.cjs` | -- | A UMD version of the `web-vitals.attribution.js` build (exposed on the `self.webVitals.*` namespace). |
| `web-vitals.attribution.iife.js` | -- | An IIFE version of the `web-vitals.attribution.js` build (exposed on the `self.webVitals.*` namespace). |
Most developers will generally want to use the "standard" build (via either the ES module or UMD version, depending on your bundler/build system), as it's the easiest to use out of the box and integrate into existing tools.
However, if you'd like to collect additional debug information to help you diagnose performance bottlenecks based on real-user issues, use the "attribution" build.
For guidance on how to collect and use real-user data to debug performance issues, see Debug performance in the field.
Metric
All metrics types inherit from the following base interface:
```ts
interface Metric {
  /**
   * The name of the metric (in acronym form).
   */
  name: 'CLS' | 'FCP' | 'FID' | 'INP' | 'LCP' | 'TTFB';

  /**
   * The current value of the metric.
   */
  value: number;

  /**
   * The rating as to whether the metric value is within the "good",
   * "needs improvement", or "poor" thresholds of the metric.
   */
  rating: 'good' | 'needs-improvement' | 'poor';

  /**
   * The delta between the current value and the last-reported value.
   * On the first report, `delta` and `value` will always be the same.
   */
  delta: number;

  /**
   * A unique ID representing this particular metric instance. This ID can
   * be used by an analytics tool to dedupe multiple values sent for the same
   * metric instance, or to group multiple deltas together and calculate a
   * total. It can also be used to differentiate multiple different metric
   * instances sent from the same page, which can happen if the page is
   * restored from the back/forward cache (in that case new metric objects
   * get created).
   */
  id: string;

  /**
   * Any performance entries relevant to the metric value calculation.
   * The array may also be empty if the metric value was not based on any
   * entries (e.g. a CLS value of 0 given no layout shifts).
   */
  entries: PerformanceEntry[];

  /**
   * The type of navigation.
   *
   * This will be the value returned by the Navigation Timing API (or
   * `undefined` if the browser doesn't support that API), with the following
   * exceptions:
   * - 'back-forward-cache': for pages that are restored from the bfcache.
   * - 'back_forward' is renamed to 'back-forward' for consistency.
   * - 'prerender': for pages that were prerendered.
   * - 'restore': for pages that were discarded by the browser and then
   *   restored by the user.
   */
  navigationType:
    | 'navigate'
    | 'reload'
    | 'back-forward'
    | 'back-forward-cache'
    | 'prerender'
    | 'restore';
}
```
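As a quick illustration (a minimal sketch), any callback receives these base fields and can destructure them directly:

```js
import {onCLS} from 'web-vitals';

onCLS((metric) => {
  // All of these fields come from the base Metric interface above.
  const {name, value, rating, delta, id, navigationType} = metric;
  console.log(name, value, rating, delta, id, navigationType);
});
```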
Metric-specific subclasses:
CLSMetric
```ts
interface CLSMetric extends Metric {
  name: 'CLS';
  entries: LayoutShift[];
}
```
FCPMetric
```ts
interface FCPMetric extends Metric {
  name: 'FCP';
  entries: PerformancePaintTiming[];
}
```
FIDMetric
[!CAUTION] This interface is deprecated and will be removed in the next major release.
```ts
interface FIDMetric extends Metric {
  name: 'FID';
  entries: PerformanceEventTiming[];
}
```
INPMetric
```ts
interface INPMetric extends Metric {
  name: 'INP';
  entries: PerformanceEventTiming[];
}
```
LCPMetric
```ts
interface LCPMetric extends Metric {
  name: 'LCP';
  entries: LargestContentfulPaint[];
}
```
TTFBMetric
```ts
interface TTFBMetric extends Metric {
  name: 'TTFB';
  entries: PerformanceNavigationTiming[];
}
```
MetricRatingThresholds
The thresholds of a metric's "good", "needs improvement", and "poor" ratings.
| Metric value | Rating |
|---|---|
| ≦ [0] | "good" |
| > [0] and ≦ [1] | "needs improvement" |
| > [1] | "poor" |
```ts
type MetricRatingThresholds = [number, number];
```
See also Rating Thresholds.
ReportOpts
```ts
interface ReportOpts {
  reportAllChanges?: boolean;
  durationThreshold?: number;
}
```
LoadState
The `LoadState` type is used in several of the metric attribution objects.
```ts
/**
 * The loading state of the document. Note: this value is similar to
 * `document.readyState` but it subdivides the "interactive" state into the
 * time before and after the DOMContentLoaded event fires.
 *
 * State descriptions:
 * - `loading`: the initial document response has not yet been fully downloaded
 *   and parsed. This is equivalent to the corresponding `readyState` value.
 * - `dom-interactive`: the document has been fully loaded and parsed, but
 *   scripts may not have yet finished loading and executing.
 * - `dom-content-loaded`: the document is fully loaded and parsed, and all
 *   scripts (except `async` scripts) have loaded and finished executing.
 * - `complete`: the document and all of its sub-resources have finished
 *   loading. This is equivalent to the corresponding `readyState` value.
 */
type LoadState =
  | 'loading'
  | 'dom-interactive'
  | 'dom-content-loaded'
  | 'complete';
```
onCLS()
```ts
function onCLS(callback: (metric: CLSMetric) => void, opts?: ReportOpts): void;
```
Calculates the CLS value for the current page and calls the `callback` function once the value is ready to be reported, along with all `layout-shift` performance entries that were used in the metric value calculation. The reported value is a double (corresponding to a layout shift score).
If the `reportAllChanges` configuration option is set to `true`, the `callback` function will be called as soon as the value is initially determined, as well as any time the value changes throughout the page lifespan (note: not necessarily for every layout shift).
[!IMPORTANT] CLS should be continually monitored for changes throughout the entire lifespan of a page, including if the user returns to the page after it's been hidden/backgrounded. However, since browsers often will not fire additional callbacks once the user has backgrounded a page, `callback` is always called when the page's visibility state changes to hidden. As a result, the `callback` function might be called multiple times during the same page load (see Reporting only the delta of changes for how to manage this).
onFCP()
```ts
function onFCP(callback: (metric: FCPMetric) => void, opts?: ReportOpts): void;
```
Calculates the FCP value for the current page and calls the `callback` function once the value is ready, along with the relevant `paint` performance entry used to determine the value. The reported value is a `DOMHighResTimeStamp`.
onFID()
[!CAUTION] This function is deprecated and will be removed in the next major release.
```ts
function onFID(callback: (metric: FIDMetric) => void, opts?: ReportOpts): void;
```
Calculates the FID value for the current page and calls the `callback` function once the value is ready, along with the relevant `first-input` performance entry used to determine the value. The reported value is a `DOMHighResTimeStamp`.
[!IMPORTANT] Since FID is only reported after the user interacts with the page, it's possible that it will not be reported for some page loads.
onINP()
```ts
function onINP(callback: (metric: INPMetric) => void, opts?: ReportOpts): void;
```
Calculates the INP value for the current page and calls the `callback` function once the value is ready, along with the `event` performance entries reported for that interaction. The reported value is a `DOMHighResTimeStamp`.
A custom `durationThreshold` configuration option can optionally be passed to control which `event-timing` entries are considered for INP reporting. The default threshold is `40`, which means INP scores of less than 40 are reported as 0. Note that this will not affect your 75th percentile INP value unless that value is also less than 40 (well below the recommended good threshold).
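For example, a minimal sketch passing a custom threshold (the value 16 here is an arbitrary illustration, roughly one frame at 60Hz):

```js
import {onINP} from 'web-vitals';

// Consider event-timing entries with durations as low as 16ms
// (the library's default threshold is 40ms).
onINP(console.log, {durationThreshold: 16});
```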
If the `reportAllChanges` configuration option is set to `true`, the `callback` function will be called as soon as the value is initially determined, as well as any time the value changes throughout the page lifespan (note: not necessarily for every interaction).
[!IMPORTANT] INP should be continually monitored for changes throughout the entire lifespan of a page, including if the user returns to the page after it's been hidden/backgrounded. However, since browsers often will not fire additional callbacks once the user has backgrounded a page, `callback` is always called when the page's visibility state changes to hidden. As a result, the `callback` function might be called multiple times during the same page load (see Reporting only the delta of changes for how to manage this).
onLCP()
```ts
function onLCP(callback: (metric: LCPMetric) => void, opts?: ReportOpts): void;
```
Calculates the LCP value for the current page and calls the `callback` function once the value is ready (along with the relevant `largest-contentful-paint` performance entry used to determine the value). The reported value is a `DOMHighResTimeStamp`.
If the `reportAllChanges` configuration option is set to `true`, the `callback` function will be called any time a new `largest-contentful-paint` performance entry is dispatched, or once the final value of the metric has been determined.
onTTFB()
```ts
function onTTFB(
  callback: (metric: TTFBMetric) => void,
  opts?: ReportOpts,
): void;
```
Calculates the TTFB value for the current page and calls the `callback` function once the page has loaded, along with the relevant `navigation` performance entry used to determine the value. The reported value is a `DOMHighResTimeStamp`.
Note: this function waits until after the page is loaded to call `callback`, in order to ensure all properties of the `navigation` entry are populated. This is useful if you want to report on other metrics exposed by the Navigation Timing API.
For example, the TTFB metric starts from the page's time origin, which means it includes time spent on DNS lookup, connection negotiation, network latency, and server processing time.
```js
import {onTTFB} from 'web-vitals';

onTTFB((metric) => {
  // Calculate the request time by subtracting from TTFB
  // everything that happened prior to the request starting.
  const requestTime = metric.value - metric.entries[0].requestStart;
  console.log('Request time:', requestTime);
});
```
[!NOTE] Browsers that do not support `navigation` entries will fall back to using `performance.timing` (with the timestamps converted from epoch time to `DOMHighResTimeStamp`). This ensures code referencing these values (like in the example above) will work the same in all browsers.
The thresholds of each metric's "good", "needs improvement", and "poor" ratings are available as `MetricRatingThresholds`.
Example:
```js
import {CLSThresholds, INPThresholds, LCPThresholds} from 'web-vitals';

console.log(CLSThresholds); // [ 0.1, 0.25 ]
console.log(INPThresholds); // [ 200, 500 ]
console.log(LCPThresholds); // [ 2500, 4000 ]
```
[!NOTE] It's typically not necessary (or recommended) to manually calculate metric value ratings using these thresholds. Use the `Metric['rating']` value instead.
The following objects contain potentially-helpful debugging information that can be sent along with the metric values for the current page visit, in order to help identify issues happening to real users in the field.
When using the attribution build, these objects are found as an `attribution` property on each metric. See the attribution build section for details on how to use this feature.
CLSAttribution
```ts
interface CLSAttribution {
  /**
   * A selector identifying the first element (in document order) that
   * shifted when the single largest layout shift contributing to the page's
   * CLS score occurred.
   */
  largestShiftTarget?: string;
  /**
   * The time when the single largest layout shift contributing to the page's
   * CLS score occurred.
   */
  largestShiftTime?: DOMHighResTimeStamp;
  /**
   * The layout shift score of the single largest layout shift contributing to
   * the page's CLS score.
   */
  largestShiftValue?: number;
  /**
   * The `LayoutShiftEntry` representing the single largest layout shift
   * contributing to the page's CLS score. (Useful when you need more than just
   * `largestShiftTarget`, `largestShiftTime`, and `largestShiftValue`).
   */
  largestShiftEntry?: LayoutShift;
  /**
   * The first element source (in document order) among the `sources` list
   * of the `largestShiftEntry` object. (Also useful when you need more than
   * just `largestShiftTarget`, `largestShiftTime`, and `largestShiftValue`).
   */
  largestShiftSource?: LayoutShiftAttribution;
  /**
   * The loading state of the document at the time when the largest layout
   * shift contribution to the page's CLS score occurred (see `LoadState`
   * for details).
   */
  loadState?: LoadState;
}
```
FCPAttribution
```ts
interface FCPAttribution {
  /**
   * The time from when the user initiates loading the page until when the
   * browser receives the first byte of the response (a.k.a. TTFB).
   */
  timeToFirstByte: number;
  /**
   * The delta between TTFB and the first contentful paint (FCP).
   */
  firstByteToFCP: number;
  /**
   * The loading state of the document at the time when FCP occurred (see
   * `LoadState` for details). Ideally, documents can paint before they finish
   * loading (e.g. the `loading` or `dom-interactive` phases).
   */
  loadState: LoadState;
  /**
   * The `PerformancePaintTiming` entry corresponding to FCP.
   */
  fcpEntry?: PerformancePaintTiming;
  /**
   * The `navigation` entry of the current page, which is useful for diagnosing
   * general page load issues. This can be used to access `serverTiming`, for
   * example: `navigationEntry?.serverTiming`.
   */
  navigationEntry?: PerformanceNavigationTiming;
}
```
FIDAttribution
[!CAUTION] This interface is deprecated and will be removed in the next major release.
```ts
interface FIDAttribution {
  /**
   * A selector identifying the element that the user interacted with. This
   * element will be the `target` of the `event` dispatched.
   */
  eventTarget: string;
  /**
   * The time when the user interacted. This time will match the `timeStamp`
   * value of the `event` dispatched.
   */
  eventTime: number;
  /**
   * The `type` of the `event` dispatched from the user interaction.
   */
  eventType: string;
  /**
   * The `PerformanceEventTiming` entry corresponding to FID.
   */
  eventEntry: PerformanceEventTiming;
  /**
   * The loading state of the document at the time when the first interaction
   * occurred (see `LoadState` for details). If the first interaction occurred
   * while the document was loading and executing script (e.g. usually in the
   * `dom-interactive` phase) it can result in long input delays.
   */
  loadState: LoadState;
}
```
INPAttribution
```ts
interface INPAttribution {
  /**
   * A selector identifying the element that the user first interacted with
   * as part of the frame where the INP candidate interaction occurred.
   * If this value is an empty string, that generally means the element was
   * removed from the DOM after the interaction.
   */
  interactionTarget: string;
  /**
   * A reference to the HTML element identified by `interactionTarget`.
   * NOTE: for attribution purposes, a selector identifying the element is
   * typically more useful than the element itself. However, the element is
   * also made available in case additional context is needed.
   */
  interactionTargetElement: Node | undefined;
  /**
   * The time when the user first interacted during the frame where the INP
   * candidate interaction occurred (if more than one interaction occurred
   * within the frame, only the first time is reported).
   */
  interactionTime: DOMHighResTimeStamp;
  /**
   * The best-guess timestamp of the next paint after the interaction.
   * In general, this timestamp is the same as the `startTime + duration` of
   * the event timing entry. However, since `duration` values are rounded to
   * the nearest 8ms, it can sometimes appear that the paint occurred before
   * processing ended (which cannot happen). This value clamps the paint time
   * so it's always after `processingEnd` from the Event Timing API and
   * `renderStart` from the Long Animation Frame API (where available).
   * It also averages the duration values for all entries in the same
   * animation frame, which should be closer to the "real" value.
   */
  nextPaintTime: DOMHighResTimeStamp;
  /**
   * The type of interaction, based on the event type of the `event` entry
   * that corresponds to the interaction (i.e. the first `event` entry
   * containing an `interactionId` dispatched in a given animation frame).
   * For "pointerdown", "pointerup", or "click" events this will be "pointer",
   * and for "keydown" or "keyup" events this will be "keyboard".
   */
  interactionType: 'pointer' | 'keyboard';
  /**
   * An array of Event Timing entries that were processed within the same
   * animation frame as the INP candidate interaction.
   */
  processedEventEntries: PerformanceEventTiming[];
  /**
   * If the browser supports the Long Animation Frame API, this array will
   * include any `long-animation-frame` entries that intersect with the INP
   * candidate interaction's `startTime` and the `processingEnd` time of the
   * last event processed within that animation frame. If the browser does not
   * support the Long Animation Frame API or no `long-animation-frame` entries
   * are detected, this array will be empty.
   */
  longAnimationFrameEntries: PerformanceLongAnimationFrameTiming[];
  /**
   * The time from when the user interacted with the page until when the
   * browser was first able to start processing event listeners for that
   * interaction. This time captures the delay before event processing can
   * begin due to the main thread being busy with other work.
   */
  inputDelay: number;
  /**
   * The time from when the first event listener started running in response to
   * the user interaction until when all event listener processing has finished.
   */
  processingDuration: number;
  /**
   * The time from when the browser finished processing all event listeners for
   * the user interaction until the next frame is presented on the screen and
   * visible to the user. This time includes work on the main thread (such as
   * `requestAnimationFrame()` callbacks, `ResizeObserver` and
   * `IntersectionObserver` callbacks, and style/layout calculation) as well
   * as off-main-thread work (such as compositor, GPU, and raster work).
   */
  presentationDelay: number;
  /**
   * The loading state of the document at the time when the interaction
   * corresponding to INP occurred (see `LoadState` for details). If the
   * interaction occurred while the document was loading and executing script
   * (e.g. usually in the `dom-interactive` phase) it can result in long delays.
   */
  loadState: LoadState;
}
```
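As a sketch of how these fields might be used for diagnosis (the logging format is illustrative; the three phases roughly sum to the INP value):

```js
import {onINP} from 'web-vitals/attribution';

onINP(({value, attribution}) => {
  // Break INP down into its three phases (all in milliseconds).
  const {inputDelay, processingDuration, presentationDelay} = attribution;
  console.log(`INP: ${value}ms`, {
    inputDelay,
    processingDuration,
    presentationDelay,
  });
});
```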
LCPAttribution
```ts
interface LCPAttribution {
  /**
   * The element corresponding to the largest contentful paint for the page.
   */
  element?: string;
  /**
   * The URL (if applicable) of the LCP image resource. If the LCP element
   * is a text node, this value will not be set.
   */
  url?: string;
  /**
   * The time from when the user initiates loading the page until when the
   * browser receives the first byte of the response (a.k.a. TTFB). See
   * [Optimize LCP](https://web.dev/articles/optimize-lcp) for details.
   */
  timeToFirstByte: number;
  /**
   * The delta between TTFB and when the browser starts loading the LCP
   * resource (if there is one, otherwise 0). See
   * [Optimize LCP](https://web.dev/articles/optimize-lcp) for details.
   */
  resourceLoadDelay: number;
  /**
   * The total time it takes to load the LCP resource itself (if there is one,
   * otherwise 0). See [Optimize LCP](https://web.dev/articles/optimize-lcp)
   * for details.
   */
  resourceLoadDuration: number;
  /**
   * The delta between when the LCP resource finishes loading until the LCP
   * element is fully rendered. See
   * [Optimize LCP](https://web.dev/articles/optimize-lcp) for details.
   */
  elementRenderDelay: number;
  /**
   * The `navigation` entry of the current page, which is useful for diagnosing
   * general page load issues. This can be used to access `serverTiming`, for
   * example: `navigationEntry?.serverTiming`.
   */
  navigationEntry?: PerformanceNavigationTiming;
  /**
   * The `resource` entry for the LCP resource (if applicable), which is useful
   * for diagnosing resource load issues.
   */
  lcpResourceEntry?: PerformanceResourceTiming;
  /**
   * The `LargestContentfulPaint` entry corresponding to LCP.
   */
  lcpEntry?: LargestContentfulPaint;
}
```
TTFBAttribution
```ts
export interface TTFBAttribution {
  /**
   * The total time from when the user initiates loading the page to when the
   * page starts to handle the request. Large values here are typically due
   * to HTTP redirects, though other browser processing contributes to this
   * duration as well (so even without redirects it's generally not zero).
   */
  waitingDuration: number;
  /**
   * The total time spent checking the HTTP cache for a match. For navigations
   * handled via service worker, this duration usually includes service worker
   * start-up time as well as time processing `fetch` event listeners, with
   * some exceptions, see: https://github.com/w3c/navigation-timing/issues/199
   */
  cacheDuration: number;
  /**
   * The total time to resolve the DNS for the requested domain.
   */
  dnsDuration: number;
  /**
   * The total time to create the connection to the requested domain.
   */
  connectionDuration: number;
  /**
   * The total time from when the request was sent until the first byte of the
   * response was received. This includes network time as well as server
   * processing time.
   */
  requestDuration: number;
  /**
   * The `navigation` entry of the current page, which is useful for diagnosing
   * general page load issues. This can be used to access `serverTiming`, for
   * example: `navigationEntry?.serverTiming`.
   */
  navigationEntry?: PerformanceNavigationTiming;
}
```
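Similarly, a minimal sketch logging the TTFB breakdown from the fields above (the output format is illustrative):

```js
import {onTTFB} from 'web-vitals/attribution';

onTTFB(({value, attribution}) => {
  // Each duration covers one sequential phase of the navigation request.
  const {
    waitingDuration,
    cacheDuration,
    dnsDuration,
    connectionDuration,
    requestDuration,
  } = attribution;
  console.log(`TTFB: ${value}ms`, {
    waitingDuration,
    cacheDuration,
    dnsDuration,
    connectionDuration,
    requestDuration,
  });
});
```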
The `web-vitals` code has been tested and will run without error in all major browsers, as well as Internet Explorer back to version 9. However, some of the APIs required to capture these metrics are currently only available in Chromium-based browsers (e.g. Chrome, Edge, Opera, Samsung Internet).
Browser support for each function is as follows:
- `onCLS()`: Chromium
- `onFCP()`: Chromium, Firefox, Safari
- `onFID()`: Chromium, Firefox (deprecated)
- `onINP()`: Chromium
- `onLCP()`: Chromium, Firefox
- `onTTFB()`: Chromium, Firefox, Safari

The `web-vitals` library is primarily a wrapper around the Web APIs that measure the Web Vitals metrics, which means the limitations of those APIs will mostly apply to this library as well. More details on these limitations are available in this blog post.
The primary limitation of these APIs is that they have no visibility into `<iframe>` content (not even same-origin iframes), which means pages that make use of iframes will likely see a difference between the data measured by this library and the data available in the Chrome User Experience Report (which does include iframe content).
For same-origin iframes, it's possible to use the `web-vitals` library to measure metrics, but it's tricky because it requires the developer to add the library to every frame and `postMessage()` the results to the parent frame for aggregation.
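Here is a minimal sketch of that pattern, assuming every same-origin frame runs this snippet (the message `type` field and origin check are illustrative choices, not part of the library; deeply nested frames would also need to re-forward messages they receive):

```js
import {onCLS, onINP, onLCP} from 'web-vitals';

// In every frame: forward each metric to the parent frame (or log it
// if this is the top-level page).
function forward(metric) {
  const payload = {
    type: 'web-vitals', // Hypothetical discriminator for the message.
    name: metric.name,
    value: metric.value,
    id: metric.id,
  };
  if (window.parent !== window) {
    window.parent.postMessage(payload, window.origin);
  } else {
    console.log('top-level metric', payload);
  }
}
onCLS(forward);
onINP(forward);
onLCP(forward);

// In the top-level page: aggregate metrics bubbled up from child frames.
addEventListener('message', (event) => {
  if (event.origin === window.origin && event.data?.type === 'web-vitals') {
    console.log('child-frame metric', event.data);
  }
});
```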
[!NOTE] Given the lack of iframe support, the `onCLS()` function technically measures DCLS (Document Cumulative Layout Shift) rather than CLS, if the page includes iframes.
The `web-vitals` source code is written in TypeScript. To transpile the code and build the production bundles, run the following command:
```sh
npm run build
```
To build the code and watch for changes, run:
```sh
npm run watch
```
The `web-vitals` code is tested in real browsers using webdriver.io. Use the following command to run the tests:
```sh
npm test
```
To test any of the APIs manually, you can start the test server:
```sh
npm run test:server
```
Then navigate to `http://localhost:9090/test/<view>`, where `<view>` is the basename of one of the templates under `/test/views/`.
You'll likely want to combine this with `npm run watch` to ensure any changes you make are transpiled and rebuilt.
- `web-vitals-reporter`: JavaScript library to batch `callback` functions and send data with a single request.

No vulnerabilities found.
OpenSSF Scorecard (last scanned on 2024-11-25):

- No dangerous workflow patterns detected.
- 6 commit(s) and 24 issue activity found in the last 90 days -- score normalized to 10.
- No binaries found in the repo.
- License file detected.
- SAST tool is not run on all commits -- score normalized to 8.
- Found 10/27 approved changesets -- score normalized to 3.
- 7 existing vulnerabilities detected.
- Detected GitHub workflow tokens with excessive permissions.
- Dependency not pinned by hash detected -- score normalized to 0.
- No effort to earn an OpenSSF best practices badge detected.
- Project is not fuzzed.
- Security policy file not detected.
- Branch protection not enabled on development/release branches.
The Open Source Security Foundation is a cross-industry collaboration to improve the security of open source software (OSS). The Scorecard provides security health metrics for open source projects.