Gathering detailed insights and metrics for playwright-lighthouse
```bash
npm install playwright-lighthouse
```
250 Stars · 134 Commits · 28 Forks · 4 Watching · 8 Branches · 12 Contributors · Updated on 26 Nov 2024 · JavaScript (100%)
Total downloads:

| Period | Downloads | Change vs. previous period |
| --- | --- | --- |
| Last day | 14,375 | -11.1% |
| Last week | 83,486 | -0.3% |
| Last month | 377,902 | +2.4% |
| Last year | 3,084,030 | +169.6% |
Lighthouse is a tool developed by Google that analyzes web apps and web pages, collecting modern performance metrics and insights on developer best practices.
Playwright is a Node library to automate Chromium, Firefox and WebKit with a single API. Playwright is built to enable cross-browser web automation that is ever-green, capable, reliable and fast.
The purpose of this package is to produce web audit reports for several pages in connected mode and in an automated (programmatic) way.
| playwright-lighthouse version | Compatible Lighthouse version |
| --- | --- |
| v3.x.x | 10 |
| v4.x.x | 11 |
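For example, to pair the major versions from a row of the table above (illustrative pins only; pick the row that matches your setup):

```bash
# Illustrative: pin majors that match a row in the compatibility table
npm install --save-dev playwright-lighthouse@4 lighthouse@11
```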
Add the `playwright-lighthouse`, `playwright` & `lighthouse` libraries to your project:
```bash
$ yarn add -D playwright-lighthouse playwright lighthouse
# or
$ npm install --save-dev playwright-lighthouse playwright lighthouse
```
After completing the installation, you can use `playwright-lighthouse` in your code to audit the current page. In your test code you need to import `playwright-lighthouse` and assign a `port` for the Lighthouse scan. You can choose any non-allocated port.
```javascript
import { playAudit } from 'playwright-lighthouse';
import playwright from 'playwright';

describe('audit example', () => {
  it('open browser', async () => {
    const browser = await playwright['chromium'].launch({
      args: ['--remote-debugging-port=9222'],
    });
    const page = await browser.newPage();
    await page.goto('https://angular.io/');

    await playAudit({
      page: page,
      port: 9222,
    });

    await browser.close();
  });
});
```
If you don't provide any threshold argument to the `playAudit` command, the test will fail if at least one of your metrics is under `100`.
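For illustration, omitting `thresholds` behaves roughly like asserting a score of 100 in every category (the category names below are the five used later in this README):

```javascript
// Roughly equivalent to omitting `thresholds` entirely:
// every category must score 100 for the test to pass.
await playAudit({
  page,
  port: 9222,
  thresholds: {
    performance: 100,
    accessibility: 100,
    'best-practices': 100,
    seo: 100,
    pwa: 100,
  },
});
```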
You can make assertions on the different metrics by passing an object as an argument to the `playAudit` command:
```javascript
import { playAudit } from 'playwright-lighthouse';
import playwright from 'playwright';

describe('audit example', () => {
  it('open browser', async () => {
    const browser = await playwright['chromium'].launch({
      args: ['--remote-debugging-port=9222'],
    });
    const page = await browser.newPage();
    await page.goto('https://angular.io/');

    await playAudit({
      page: page,
      thresholds: {
        performance: 50,
        accessibility: 50,
        'best-practices': 50,
        seo: 50,
        pwa: 50,
      },
      port: 9222,
    });

    await browser.close();
  });
});
```
If the Lighthouse analysis returns scores below the ones set in the arguments, the test will fail.
You can also make assertions on only certain metrics. For example, the following test will only verify the "correctness" of the `performance` metric:
```javascript
await playAudit({
  page: page,
  thresholds: {
    performance: 85,
  },
  port: 9222,
});
```
This test will fail only when the `performance` metric provided by Lighthouse is under `85`.
You can also pass any argument directly to the Lighthouse module using the second and third options of the command:
```javascript
const thresholdsConfig = {
  /* ... */
};

const lighthouseOptions = {
  /* ... your lighthouse options */
};

const lighthouseConfig = {
  /* ... your lighthouse configs */
};

await playAudit({
  thresholds: thresholdsConfig,
  opts: lighthouseOptions,
  config: lighthouseConfig,

  /* ... other configurations */
});
```
You can pass default lighthouse configs like so:
```javascript
import lighthouseDesktopConfig from 'lighthouse/lighthouse-core/config/lr-desktop-config';

await playAudit({
  thresholds: thresholdsConfig,
  opts: lighthouseOptions,
  config: lighthouseDesktopConfig,

  /* ... other configurations */
});
```
Sometimes it's important to set the `disableStorageReset` parameter to `false`. You can do that like this:
```javascript
const opts = {
  disableStorageReset: false,
};

await playAudit({
  page,
  port: 9222,
  opts,
});
```
Playwright by default does not share any context (e.g. auth state) between pages. Lighthouse opens a new page, so any previous authentication steps are void. To persist auth state you need to use a persistent context:
```javascript
import os from 'os';
import path from 'path';
import { playAudit } from 'playwright-lighthouse';
import { chromium } from 'playwright';

describe('audit example', () => {
  it('open browser', async () => {
    const userDataDir = path.join(os.tmpdir(), 'pw', String(Math.random()));
    const context = await chromium.launchPersistentContext(userDataDir, {
      args: ['--remote-debugging-port=9222'],
    });
    const page = await context.newPage();
    await page.goto('http://localhost:3000/');

    // Perform login steps here which will save to cookie or localStorage

    // When lighthouse opens a new page the storage will be persisted,
    // meaning the new page will have the same user session
    await playAudit({
      page: page,
      port: 9222,
    });

    await context.close();
  });
});
```
Clean up the tmp directories on playwright teardown:
```javascript
import rimraf from 'rimraf';
import os from 'os';
import path from 'path';

function globalSetup() {
  // Returning a function from globalSetup registers it as global teardown
  return () => {
    const tmpDirPath = path.join(os.tmpdir(), 'pw');
    rimraf(tmpDirPath, console.log);
  };
}

export default globalSetup;
```
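With the `@playwright/test` runner you can wrap the port and browser setup in fixtures, assigning a unique port per worker so audits can run in parallel: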
```typescript
import { chromium } from 'playwright';
import type { Browser } from 'playwright';
import { playAudit } from 'playwright-lighthouse';
import { test as base } from '@playwright/test';
import getPort from 'get-port';

export const lighthouseTest = base.extend<
  {},
  { port: number; browser: Browser }
>({
  port: [
    async ({}, use) => {
      // Assign a unique port for each playwright worker to support parallel tests
      const port = await getPort();
      await use(port);
    },
    { scope: 'worker' },
  ],

  browser: [
    async ({ port }, use) => {
      const browser = await chromium.launch({
        args: [`--remote-debugging-port=${port}`],
      });
      await use(browser);
    },
    { scope: 'worker' },
  ],
});

lighthouseTest.describe('Lighthouse', () => {
  lighthouseTest('should pass lighthouse tests', async ({ page, port }) => {
    await page.goto('http://example.com');
    await page.waitForSelector('#some-element');
    await playAudit({
      page,
      port,
    });
  });
});
```
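The same fixture approach extends to authenticated routes: combine a persistent context with a fixture that prepares the authenticated page: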
```typescript
import os from 'os';
import path from 'path';
import getPort from 'get-port';
import { BrowserContext, chromium, Page } from 'playwright';
import { test as base } from '@playwright/test';
import { playAudit } from 'playwright-lighthouse';

export const lighthouseTest = base.extend<
  {
    authenticatedPage: Page;
    context: BrowserContext;
  },
  {
    port: number;
  }
>({
  // We need to assign a unique port for each lighthouse test to allow
  // lighthouse tests to run in parallel
  port: [
    async ({}, use) => {
      const port = await getPort();
      await use(port);
    },
    { scope: 'worker' },
  ],

  // As lighthouse opens a new page, and as playwright does not by default allow
  // shared contexts, we need to explicitly create a persistent context to
  // allow lighthouse to run behind authenticated routes.
  context: [
    async ({ port }, use) => {
      const userDataDir = path.join(os.tmpdir(), 'pw', String(Math.random()));
      const context = await chromium.launchPersistentContext(userDataDir, {
        args: [`--remote-debugging-port=${port}`],
      });
      await use(context);
      await context.close();
    },
    { scope: 'test' },
  ],

  authenticatedPage: [
    async ({ context, page }, use) => {
      // Mock any requests on the entire context
      await context.route('https://example.com/token', (route) => {
        return route.fulfill({
          status: 200,
          body: JSON.stringify({
            // ...
          }),
          headers: {
            // ...
          },
        });
      });

      await page.goto('http://localhost:3000');

      // Setup your auth state by inserting cookies or localStorage values
      await insertAuthState(page);

      await use(page);
    },
    { scope: 'test' },
  ],
});

lighthouseTest.describe('Authenticated route', () => {
  lighthouseTest(
    'should pass lighthouse tests',
    async ({ port, authenticatedPage: page }) => {
      await page.goto('http://localhost:3000/my-profile');
      await playAudit({
        page,
        port,
      });
    }
  );
});
```
In case you have a `globalSetup` script in your test you might want to reuse saved state instead of running auth before every test. Additionally, you may pass `url` instead of `page` to speed up execution and save resources.
```typescript
import os from 'os';
import path from 'path';
import { chromium, test as base } from '@playwright/test';
import type { BrowserContext } from '@playwright/test';
import { playAudit } from 'playwright-lighthouse';
import getPort from 'get-port'; // version ^5.1.1 due to issues with imports in playwright 1.20.1

export const lighthouseTest = base.extend<
  { context: BrowserContext },
  { port: number }
>({
  port: [
    async ({}, use) => {
      // Assign a unique port for each playwright worker to support parallel tests
      const port = await getPort();
      await use(port);
    },
    { scope: 'worker' },
  ],

  context: [
    async ({ port, launchOptions }, use) => {
      const userDataDir = path.join(os.tmpdir(), 'pw', String(Math.random()));
      const context = await chromium.launchPersistentContext(userDataDir, {
        args: [
          ...(launchOptions.args || []),
          `--remote-debugging-port=${port}`,
        ],
      });

      // apply state previously saved in the `globalSetup`
      await context.addCookies(require('../../state-chrome.json').cookies);

      await use(context);
      await context.close();
    },
    { scope: 'test' },
  ],
});

lighthouseTest.describe('Authenticated route after globalSetup', () => {
  lighthouseTest('should pass lighthouse tests', async ({ port }) => {
    // it's possible to pass url directly instead of a page
    // to avoid opening a page an extra time and keeping it opened
    await playAudit({
      url: 'http://localhost:3000/my-profile',
      port,
    });
  });
});
```
The `playwright-lighthouse` library can produce Lighthouse CSV, HTML and JSON audit reports that you can host on your CI server. These reports can be useful for ongoing audits and monitoring from build to build.
```javascript
await playAudit({
  /* ... other configurations */

  reports: {
    formats: {
      json: true, // defaults to false
      html: true, // defaults to false
      csv: true, // defaults to false
    },
    name: `name-of-the-report`, // defaults to `lighthouse-${new Date().getTime()}`
    directory: `path/to/directory`, // defaults to `${process.cwd()}/lighthouse`
  },
});
```
Sample HTML report (screenshot omitted).
The `playAudit` function also returns a promise that resolves with the Lighthouse result object containing the LHR (the Lighthouse report in JSON format).
```javascript
const lighthouseReport = await playAudit({
  /* ... configurations */
}); // lighthouseReport contains the report results
```
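As a sketch, assuming the resolved object follows Lighthouse's RunnerResult shape (an `lhr` property whose category scores are fractions between 0 and 1), you could read individual scores like this:

```javascript
const result = await playAudit({ page, port: 9222 });

// Assumption: `result.lhr` is the Lighthouse report object,
// with category scores expressed as fractions of 1.
const performance = result.lhr.categories.performance.score * 100;
console.log(`Performance score: ${performance}`);
```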
You can run Lighthouse audits on the LambdaTest platform while executing Playwright tests with the following steps, and you can generate multiple Lighthouse reports in a single test.
Install the `playwright-lighthouse` package:

```bash
npm install playwright-lighthouse
```

Set the `LIGHTHOUSE_LAMBDATEST` environment variable:

```bash
export LIGHTHOUSE_LAMBDATEST='true'
```
```javascript
import { chromium } from 'playwright';
import { playAudit } from 'playwright-lighthouse';

(async () => {
  let browser, page;
  try {
    const capabilities = {
      browserName: 'Chrome', // Browsers allowed: `Chrome`, `MicrosoftEdge` and `pw-chromium`
      browserVersion: 'latest',
      'LT:Options': {
        platform: 'Windows 11',
        build: 'Web Performance testing',
        name: 'Lighthouse test',
        user: process.env.LT_USERNAME,
        accessKey: process.env.LT_ACCESS_KEY,
        network: true,
        video: true,
        console: true,
      },
    };

    browser = await chromium.connect({
      wsEndpoint: `wss://cdp.lambdatest.com/playwright?capabilities=${encodeURIComponent(JSON.stringify(capabilities))}`,
    });

    page = await browser.newPage();

    await page.goto('https://duckduckgo.com');
    let element = await page.locator('[name="q"]');
    await element.click();
    await element.type('Playwright');
    await element.press('Enter');

    try {
      await playAudit({
        url: 'https://duckduckgo.com',
        page: page,
        thresholds: {
          performance: 50,
          accessibility: 50,
          'best-practices': 50,
          seo: 50,
          pwa: 10,
        },
        reports: {
          formats: {
            json: true,
            html: true,
            csv: true,
          },
        },
      });

      await page.evaluate(
        (_) => {},
        `lambdatest_action: ${JSON.stringify({ action: 'setTestStatus', arguments: { status: 'passed', remark: 'Web performance metrics are above the thresholds.' } })}`
      );
    } catch (e) {
      await page.evaluate(
        (_) => {},
        `lambdatest_action: ${JSON.stringify({ action: 'setTestStatus', arguments: { status: 'failed', remark: e.stack } })}`
      );
      console.error(e);
    }
  } catch (e) {
    await page.evaluate(
      (_) => {},
      `lambdatest_action: ${JSON.stringify({ action: 'setTestStatus', arguments: { status: 'failed', remark: e.stack } })}`
    );
  } finally {
    await page.close();
    await browser.close();
  }
})();
```
You can view your tests on the LambdaTest Web Automation dashboard.
You can raise any issue here.

If it works for you, give it a Star! :star:

Copyright © 2020- Abhinaba Ghosh
No vulnerabilities found.
OpenSSF Scorecard results (last scanned on 2024-11-25):

- Binary Artifacts: no binaries found in the repo
- Dangerous Workflow: no dangerous workflow patterns detected
- License: license file detected
- Vulnerabilities: 6 existing vulnerabilities detected
- Code Review: found 5/18 approved changesets -- score normalized to 2
- Maintained: 0 commit(s) and 0 issue activity found in the last 90 days -- score normalized to 0
- Token Permissions: detected GitHub workflow tokens with excessive permissions
- Pinned Dependencies: dependency not pinned by hash detected -- score normalized to 0
- CII Best Practices: no effort to earn an OpenSSF best practices badge detected
- Fuzzing: project is not fuzzed
- Security Policy: security policy file not detected
- Branch Protection: branch protection not enabled on development/release branches
- SAST: SAST tool is not run on all commits -- score normalized to 0
The Open Source Security Foundation is a cross-industry collaboration to improve the security of open source software (OSS). The Scorecard provides security health metrics for open source projects.