```shell
npm install @storybook/test-runner
```
MIT License · 262 stars · 923 commits · 82 forks · 10 watchers · 81 branches · 128 contributors · Updated on Jun 24, 2025
- Latest version: 0.23.0 (`@storybook/test-runner@0.23.0`), published on Jun 11, 2025
- Unpacked size: 2.90 MB (tarball: 611.74 kB, 16 files)
- Published with npm 10.8.2 on Node 18.20.8
Storybook test runner turns all of your stories into executable tests.
See the announcement of Interaction Testing with Storybook in detail in this blog post or watch this video to see it in action.
The Storybook test runner uses Jest as a runner, and Playwright as a testing framework. Each one of your .stories
files is transformed into a spec file, and each story becomes a test, which is run in a headless browser.
The test runner is simple in design – it just visits each story from a running Storybook instance and makes sure the component is not failing:

- For stories without a `play` function, it verifies whether the story rendered without any errors. This is essentially a smoke test.
- For stories with a `play` function, it also checks for errors in the `play` function and that all assertions passed. This is essentially an interaction test.

If there are any failures, the test runner will provide an output with the error, along with a link to the failing story, so you can see the error yourself and debug it directly in the browser:
Use the following table to use the correct version of this package, based on the version of Storybook you're using:
| Test runner version | Storybook version |
| --- | --- |
| ^0.19.0 | ^8.2.0 |
| ~0.17.0 | ^8.0.0 |
| ~0.16.0 | ^7.0.0 |
| ~0.9.4 | ^6.4.0 |
```shell
yarn add @storybook/test-runner -D
```
Add a `test-storybook` script to your package.json:

```json
{
  "scripts": {
    "test-storybook": "test-storybook"
  }
}
```
Optionally, follow the documentation for writing interaction tests and using addon-interactions to visualize the interactions with an interactive debugger in Storybook.
Run Storybook (the test runner runs against a running Storybook instance):
```shell
yarn storybook
```

Run the test runner in a separate terminal:

```shell
yarn test-storybook
```
[!Note] The runner assumes that your Storybook is running on port `6006`. If you're running Storybook on another port, either use `--url` or set the `TARGET_URL` environment variable before running your command:

```shell
yarn test-storybook --url http://127.0.0.1:9009
# or
TARGET_URL=http://127.0.0.1:9009 yarn test-storybook
```
```shell
Usage: test-storybook [options]
```
| Options | Description |
| --- | --- |
| `--help` | Output usage information. `test-storybook --help` |
| `-i`, `--index-json` | Run in index json mode. Automatically detected (requires a compatible Storybook). `test-storybook --index-json` |
| `--no-index-json` | Disables index json mode. `test-storybook --no-index-json` |
| `-c`, `--config-dir [dir-name]` | Directory where to load Storybook configurations from. `test-storybook -c .storybook` |
| `--watch` | Watch files for changes and rerun tests related to changed files. `test-storybook --watch` |
| `--watchAll` | Watch files for changes and rerun all tests when something changes. `test-storybook --watchAll` |
| `--coverage` | Indicates that test coverage information should be collected and reported in the output. `test-storybook --coverage` |
| `--coverageDirectory` | Directory where to write coverage report output. `test-storybook --coverage --coverageDirectory coverage/ui/storybook` |
| `--url` | Define the URL to run tests in. Useful for custom Storybook URLs. `test-storybook --url http://the-storybook-url-here.com` |
| `--browsers` | Define browsers to run tests in. One or multiple of: chromium, firefox, webkit. `test-storybook --browsers firefox chromium` |
| `--maxWorkers [amount]` | Specifies the maximum number of workers the worker-pool will spawn for running tests. `test-storybook --maxWorkers=2` |
| `--testTimeout [number]` | Sets the default timeout of test cases. `test-storybook --testTimeout=15_000` |
| `--no-cache` | Disable the cache. `test-storybook --no-cache` |
| `--clearCache` | Deletes the Jest cache directory and then exits without running tests. `test-storybook --clearCache` |
| `--verbose` | Display individual test results with the test suite hierarchy. `test-storybook --verbose` |
| `-u`, `--updateSnapshot` | Re-record every snapshot that fails during this test run. `test-storybook -u` |
| `--eject` | Creates a local configuration file to override defaults of the test-runner. `test-storybook --eject` |
| `--json` | Prints the test results in JSON. This mode will send all other test output and user messages to stderr. `test-storybook --json` |
| `--outputFile` | Write test results to a file when the `--json` option is also specified. `test-storybook --json --outputFile results.json` |
| `--junit` | Indicates that test information should be reported in a JUnit file. `test-storybook --junit` |
| `--listTests` | Lists all test files that will be run, and exits. `test-storybook --listTests` |
| `--ci` | Instead of the regular behavior of storing a new snapshot automatically, it will fail the test and require Jest to be run with `--updateSnapshot`. `test-storybook --ci` |
| `--shard [shardIndex/shardCount]` | Splits your test suite across different machines to run in CI. `test-storybook --shard=1/3` |
| `--failOnConsole` | Makes tests fail on browser console errors. `test-storybook --failOnConsole` |
| `--includeTags` | (experimental) Only test stories that match the specified tags, comma separated. `test-storybook --includeTags="test-only"` |
| `--excludeTags` | (experimental) Do not test stories that match the specified tags, comma separated. `test-storybook --excludeTags="broken-story,todo"` |
| `--skipTags` | (experimental) Do not test stories that match the specified tags and mark them as skipped in the CLI output, comma separated. `test-storybook --skipTags="design"` |
| `--disable-telemetry` | Disable sending telemetry data. `test-storybook --disable-telemetry` |
The test runner is based on Jest and will accept most of the CLI options that Jest does, like `--watch`, `--watchAll`, `--maxWorkers`, `--testTimeout`, etc. It works out of the box, but if you want better control over its configuration, you can eject its configuration by running `test-storybook --eject` to create a local `test-runner-jest.config.js` file in the root folder of your project. This file will be used by the test runner.
[!Note] The `test-runner-jest.config.js` file can be placed inside your Storybook config dir as well. If you pass the `--config-dir` option, the test-runner will look for the config file there as well.
The configuration file will accept options for two runners:
The test runner uses jest-playwright and you can pass testEnvironmentOptions to further configure it.
The Storybook test runner comes with Jest installed as an internal dependency. You can pass Jest options based on the version of Jest that comes with the test runner.
| Test runner version | Jest version |
| --- | --- |
| ^0.6.2 | ^26.6.3 or ^27.0.0 |
| ^0.7.0 | ^28.0.0 |
| ^0.14.0 | ^29.0.0 |
If you're already using a compatible version of Jest, the test runner will use it, instead of installing a duplicate version in your node_modules folder.
Here's an example of an ejected file used to extend the tests timeout from Jest:
```js
// ./test-runner-jest.config.js
const { getJestConfig } = require('@storybook/test-runner');

const testRunnerConfig = getJestConfig();

/**
 * @type {import('@jest/types').Config.InitialOptions}
 */
module.exports = {
  // The default Jest configuration comes from @storybook/test-runner
  ...testRunnerConfig,
  /** Add your own overrides below
   * @see https://jestjs.io/docs/configuration
   */
  testTimeout: 20000, // default timeout is 15s
};
```
You might want to skip certain stories in the test-runner, run tests only against a subset of stories, or exclude certain stories entirely from your tests. This is possible via the `tags` annotation. By default, the test-runner includes every story with the `'test'` tag. In Storybook 8, this tag is included by default for all stories, unless the user says otherwise via tag negation.
This annotation can be part of a story, therefore only applying to that story, or the component meta (the default export), which applies to all stories in the file:
```js
const meta = {
  component: Button,
  tags: ['atom'],
};
export default meta;

// will inherit tags from project and meta to be ['dev', 'test', 'atom']
export const Primary = {};

export const Secondary = {
  // will combine with project and meta tags to be ['dev', 'test', 'atom', 'design']
  tags: ['design'],
};

export const Tertiary = {
  // will combine with project and meta tags to be ['dev', 'atom']
  tags: ['!test'],
};
```
[!Note] You can't import constants from another file and use them to define tags in your stories. The tags in your stories or meta must be defined inline, as an array of strings. This is a restriction due to Storybook's static analysis.
For more information on how tags combine (and can be selectively removed), please see the official docs.
Once your stories have your own custom tags, you can filter them via the tags property in your test-runner configuration file. You can also use the CLI flags `--includeTags`, `--excludeTags` or `--skipTags` for the same purpose. The CLI flags take precedence over the tags in the test-runner config, therefore overriding them.
Both `--skipTags` and `--excludeTags` will prevent a story from being tested. The difference is that skipped tests will appear as "skipped" in the CLI output, whereas excluded tests will not appear at all. Skipped tests can be useful to indicate tests that are temporarily disabled.
The test runner uses default Jest reporters, but you can add additional reporters by ejecting the configuration as explained above and overriding (or merging with) the `reporters` property.
Additionally, if you pass `--junit` to `test-storybook`, the test runner will add `jest-junit` to the reporters list and generate a test report in JUnit XML format. You can further configure the behavior of `jest-junit` by either setting specific `JEST_JUNIT_*` environment variables or by defining a `jest-junit` field in your package.json with the options you want, which will be respected when generating the report. You can look at all available options here: https://github.com/jest-community/jest-junit#configuration
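For instance, a `jest-junit` field in package.json might look like the sketch below. The option names come from jest-junit's documented configuration; the values are purely illustrative:

```json
{
  "jest-junit": {
    "outputDirectory": "test-results",
    "outputName": "storybook-junit.xml",
    "classNameTemplate": "{classname}",
    "titleTemplate": "{title}"
  }
}
```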
By default, the test runner assumes that you're running it against a locally served Storybook on port 6006.
If you want to define a target URL so it runs against deployed Storybooks, you can do so by passing the `TARGET_URL` environment variable:
```shell
TARGET_URL=https://the-storybook-url-here.com yarn test-storybook
```
Or by using the `--url` flag:
```shell
yarn test-storybook --url https://the-storybook-url-here.com
```
By default, the test runner transforms your story files into tests. It also supports a secondary "index.json mode", which runs directly against your Storybook's index data, which, depending on your Storybook version, is located in a `stories.json` or `index.json` file, a static index of all the stories.
This is particularly useful for running against a deployed Storybook, because `index.json` is guaranteed to be in sync with the Storybook you are testing. In the default, story-file-based mode, your local story files may be out of sync – or you might not even have access to the source code.
Furthermore, it is not possible to run the test-runner directly against `.mdx` stories or custom CSF dialects, such as Svelte native stories written with `addon-svelte-csf`. In these cases, `index.json` mode must be used.
To run in `index.json` mode, first make sure your Storybook has a v4 `index.json` file. You can find it when navigating to:
https://your-storybook-url-here.com/index.json
It should be a JSON file whose first key is `"v": 4`, followed by a key called `"entries"` containing a map of story IDs to JSON objects.
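As an illustration, a v4 index has roughly the shape below. The exact fields on each entry (such as `importPath` and `type`) can vary by Storybook version, so treat this as an assumed sketch rather than a specification:

```json
{
  "v": 4,
  "entries": {
    "example-button--primary": {
      "id": "example-button--primary",
      "title": "Example/Button",
      "name": "Primary",
      "importPath": "./src/stories/Button.stories.ts",
      "type": "story"
    }
  }
}
```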
In Storybook 7.0, `index.json` is enabled by default, unless you are using the `storiesOf()` syntax, in which case it is not supported.
On Storybook 6.4 and 6.5, to run in `index.json` mode, first make sure your Storybook has a file called `stories.json` with `"v": 3`, available at:
https://your-storybook-url-here.com/stories.json
If your Storybook does not have a `stories.json` file, you can generate one, provided you are not using `storiesOf` stories. To enable `stories.json` in your Storybook, set the `buildStoriesJson` feature flag in `.storybook/main.js`:
```js
// .storybook/main.ts
const config = {
  // ... rest of the config
  features: { buildStoriesJson: true },
};
export default config;
```
Once you have a valid `stories.json` file, your Storybook will be compatible with "index.json mode".
By default, the test runner will detect whether your Storybook URL is local or remote, and if it is remote, it will run in "index.json mode" automatically. To disable it, you can pass the `--no-index-json` flag:
```shell
yarn test-storybook --no-index-json
```
If you are running tests against a local Storybook but for some reason want to run in "index.json mode", you can pass the `--index-json` flag:
```shell
yarn test-storybook --index-json
```
[!Note] index.json mode is not compatible with watch mode.
If you want to add the test-runner to CI, there are a couple of ways to do so:
On GitHub Actions, once services like Vercel, Netlify and others finish their deployment runs, they emit a `deployment_status` event containing the newly generated URL under `deployment_status.target_url`. You can use that URL and set it as `TARGET_URL` for the test-runner.
Here's an example of an action to run tests based on that:
```yaml
name: Storybook Tests
on: deployment_status
jobs:
  test:
    timeout-minutes: 60
    runs-on: ubuntu-latest
    if: github.event.deployment_status.state == 'success'
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: '18.x'
      - name: Install dependencies
        run: yarn
      - name: Run Storybook tests
        run: yarn test-storybook
        env:
          TARGET_URL: '${{ github.event.deployment_status.target_url }}'
```
[!Note] If you're running the test-runner against a `TARGET_URL` of a remotely deployed Storybook (e.g. Chromatic), make sure that the URL loads a publicly available Storybook. Does it load correctly when opened in incognito mode in your browser? If your deployed Storybook is private and has authentication layers, the test-runner will hit them and thus not be able to access your stories. If that is the case, use the next option instead.
To build and run tests against your Storybook in CI, you might need to use a combination of commands involving the concurrently, http-server and wait-on libraries. Here's a recipe that does the following: Storybook is built and served locally, and once it is ready, the test runner runs against it.
```json
{
  "test-storybook:ci": "concurrently -k -s first -n \"SB,TEST\" -c \"magenta,blue\" \"yarn build-storybook --quiet && npx http-server storybook-static --port 6006 --silent\" \"wait-on tcp:6006 && yarn test-storybook\""
}
```
And then you can run `test-storybook:ci` in your CI:
```yaml
name: Storybook Tests
on: push
jobs:
  test:
    timeout-minutes: 60
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: '18.x'
      - name: Install dependencies
        run: yarn
      - name: Run Storybook tests
        run: yarn test-storybook:ci
```
[!Note] Building Storybook locally makes it simple to test Storybooks that could be available remotely but are behind authentication layers. If you also deploy your Storybooks somewhere (e.g. Chromatic, Vercel, etc.), the Storybook URL can still be useful with the test-runner. You can pass it to the `REFERENCE_URL` environment variable when running the test-storybook command, and if a story fails, the test-runner will provide a helpful message with the link to the story in your published Storybook instead.
The test runner supports code coverage with the `--coverage` flag or the `STORYBOOK_COLLECT_COVERAGE` environment variable. The prerequisite is that your components are instrumented using istanbul.
Instrumenting the code is an important step, which allows lines of code to be tracked by Storybook. This is normally achieved by using instrumentation libraries such as the Istanbul Babel plugin, or its Vite counterpart. In Storybook, you can set up instrumentation in two different ways:
For select frameworks (React, Preact, HTML, Web components, Svelte and Vue) you can use the @storybook/addon-coverage addon, which will automatically configure the plugin for you.
Install `@storybook/addon-coverage`:
```shell
yarn add -D @storybook/addon-coverage
```
And register it in your `.storybook/main.js` file:
```js
// .storybook/main.ts
const config = {
  // ...rest of your code here
  addons: ['@storybook/addon-coverage'],
};
export default config;
```
The addon has default options that might suffice for your project, and it accepts an options object for project-specific configuration.
If your framework does not use Babel or Vite, such as Angular, you will have to manually configure whatever flavor of Istanbul (Webpack loader, etc.) your project might require. Also, if your project uses Vue or Svelte, you will need to add one extra configuration for nyc.
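For Vue or Svelte, that extra nyc configuration is typically a `.nycrc.json` file at the project root listing the file extensions to include in the report. A minimal sketch (the extension list is illustrative; adjust it to your project):

```json
{
  "extension": [".js", ".ts", ".vue"]
}
```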
You can find recipes in this repository that include many different configurations and steps on how to set up coverage in each of them.
After setting up instrumentation, run Storybook, then run the test-runner with `--coverage`:
```shell
yarn test-storybook --coverage
```
The test runner will report the results in the CLI and generate a `coverage/storybook/coverage-storybook.json` file which can be used by `nyc`.
[!Note] If your components are not shown in the report and you're using Vue or Svelte, it's probably because you're missing a .nycrc.json file to specify the file extensions. Use the recipes for reference on how to set that up.
If you want to generate coverage reports with different reporters, you can use `nyc` and point it to the folder which contains the Storybook coverage file. `nyc` is a dependency of the test runner, so you will already have it in your project.
Here's an example generating an `lcov` report:
```shell
npx nyc report --reporter=lcov -t coverage/storybook --report-dir coverage/storybook
```
This will generate a more detailed, interactive coverage summary at `coverage/storybook/index.html`, which can be explored and shows the coverage in detail:
The `nyc` command will respect nyc configuration files if you have them in your project.
If you want certain parts of your code to be deliberately ignored, you can use istanbul parsing hints.
The test runner reports coverage related to the `coverage/storybook/coverage-storybook.json` file. This is by design, showing you the coverage which is tested while running Storybook.
Now, you might have other tests (e.g. unit tests) that are not covered in Storybook but are covered when running tests with Jest, from which you might also generate coverage files. In such cases, if you are using tools like Codecov to automate reporting, the coverage files will be detected automatically, and if there are multiple files in the coverage folder, they will be merged automatically.
Alternatively, in case you want to merge coverages from other tools, you should:

1. Move or copy the `coverage/storybook/coverage-storybook.json` file into `coverage/coverage-storybook.json`;
2. Run `nyc report` against the `coverage` folder.

Here's an example of how to achieve that:
```json
{
  "scripts": {
    "test:coverage": "jest --coverage",
    "test-storybook:coverage": "test-storybook --coverage",
    "coverage-report": "cp coverage/storybook/coverage-storybook.json coverage/coverage-storybook.json && nyc report --reporter=html -t coverage --report-dir coverage"
  }
}
```
[!Note] If your other tests (e.g. Jest) are using a different coverageProvider than `babel`, you will have issues when merging the coverage files. More info here.
The test-runner collects all coverage in a single file, `coverage/storybook/coverage-storybook.json`. To split the coverage across CI shards, rename the file using the shard index. To report the coverage, merge the coverage files with the `nyc merge` command.
GitHub CI example:
```yaml
test:
  name: Running Test-storybook (${{ matrix.shard }})
  strategy:
    matrix:
      shard: [1, 2, 3, 4]
  steps:
    - name: Testing storybook
      run: yarn test-storybook --coverage --shard=${{ matrix.shard }}/${{ strategy.job-total }}
    - name: Renaming coverage file
      run: mv coverage/storybook/coverage-storybook.json coverage/storybook/coverage-storybook-${{ matrix.shard }}.json
report-coverage:
  name: Reporting storybook coverage
  steps:
    - name: Merging coverage
      run: yarn nyc merge coverage/storybook merged-output/merged-coverage.json
    - name: Report coverage
      run: yarn nyc report --reporter=text -t merged-output --report-dir merged-output
```
Circle CI example:
```yaml
test:
  parallelism: 4
  steps:
    - run:
        command: yarn test-storybook --coverage --shard=$(expr $CIRCLE_NODE_INDEX + 1)/$CIRCLE_NODE_TOTAL
    - run:
        command: mv coverage/storybook/coverage-storybook.json coverage/storybook/coverage-storybook-$(expr $CIRCLE_NODE_INDEX + 1).json
report-coverage:
  steps:
    - run:
        command: yarn nyc merge coverage/storybook merged-output/merged-coverage.json
    - run:
        command: yarn nyc report --reporter=text -t merged-output --report-dir merged-output
```
GitLab CI example:
```yaml
test:
  parallel: 4
  script:
    - yarn test-storybook --coverage --shard=$CI_NODE_INDEX/$CI_NODE_TOTAL
    - mv coverage/storybook/coverage-storybook.json coverage/storybook/coverage-storybook-${CI_NODE_INDEX}.json
report-coverage:
  script:
    - yarn nyc merge coverage/storybook merged-output/merged-coverage.json
    - yarn nyc report --reporter=text -t merged-output --report-dir merged-output
```
The test runner renders a story and executes its play function if one exists. However, there are certain behaviors that are not possible to achieve via the play function, which executes in the browser. For example, if you want the test runner to take visual snapshots for you, this is something that is possible via Playwright/Jest, but must be executed in Node.
To enable use cases like visual or DOM snapshots, the test runner exports test hooks that can be overridden globally. These hooks give you access to the test lifecycle before and after the story is rendered.
There are three hooks: `setup`, `preVisit`, and `postVisit`. `setup` executes once before all the tests run. `preVisit` and `postVisit` execute within a test, before and after a story is rendered.
All three functions can be set up in the configuration file `.storybook/test-runner.js`, which can optionally export any of these functions.
[!Note] The `preVisit` and `postVisit` functions will be executed for all stories.
Async function that executes once before all the tests run. Useful for setting Node-related configuration, such as extending Jest's global `expect` for accessibility matchers.
```ts
// .storybook/test-runner.ts
import type { TestRunnerConfig } from '@storybook/test-runner';

const config: TestRunnerConfig = {
  async setup() {
    // execute whatever you like, in Node, once before all tests run
  },
};
export default config;
```
[!Note] This hook is deprecated. It has been renamed to `preVisit`; please use it instead.
Async function that receives a Playwright Page and a context object with the current story's `id`, `title`, and `name`.
Executes within a test before the story is rendered. Useful for configuring the Page before the story renders, such as setting up the viewport size.
```ts
// .storybook/test-runner.ts
import type { TestRunnerConfig } from '@storybook/test-runner';

const config: TestRunnerConfig = {
  async preVisit(page, context) {
    // execute whatever you like, before the story renders
  },
};
export default config;
```
[!Note] This hook is deprecated. It has been renamed to `postVisit`; please use it instead.
Async function that receives a Playwright Page and a context object with the current story's `id`, `title`, and `name`.
Executes within a test after a story is rendered. Useful for asserting things after the story is rendered, such as DOM and image snapshotting.
```ts
// .storybook/test-runner.ts
import type { TestRunnerConfig } from '@storybook/test-runner';

const config: TestRunnerConfig = {
  async postVisit(page, context) {
    // execute whatever you like, after the story renders
  },
};
export default config;
```
[!Note] Although you have access to Playwright's Page object in some of these hooks, we encourage you to test as much as possible within the story's play function.
To visualize the test lifecycle with these hooks, consider a simplified version of the test code automatically generated for each story in your Storybook:
```js
// executed once, before the tests
await setup();

it('button--basic', async () => {
  // filled in with data for the current story
  const context = { id: 'button--basic', title: 'Button', name: 'Basic' };

  // playwright page https://playwright.dev/docs/pages
  await page.goto(STORYBOOK_URL);

  // pre-visit hook
  if (preVisit) await preVisit(page, context);

  // render the story and watch its play function (if applicable)
  await page.execute('render', context);

  // post-visit hook
  if (postVisit) await postVisit(page, context);
});
```
These hooks are very useful for a variety of use cases, which are described in the recipes section further below.
Apart from these hooks, there are additional properties you can set in `.storybook/test-runner.js`:
The test-runner has a default `prepare` function which gets the browser into the right environment before testing the stories. You can override this behavior in case you want to customize what the browser does: for example, to set a cookie, add query parameters to the visited URL, or perform some authentication before reaching the Storybook URL. You can do that by overriding the `prepare` function.
The `prepare` function receives an object containing:

- `browserContext`: a Playwright Browser Context instance
- `page`: a Playwright Page instance
- `testRunnerConfig`: the test runner configuration object, coming from `.storybook/test-runner.js`

For reference, please use the default `prepare` function as a starting point.
[!Note] Overriding the default prepare behavior is powerful, but it makes you responsible for properly preparing the browser. Future changes to the default prepare function will not be included in your project, so you will have to keep an eye out for changes in upcoming releases.
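As a sketch (not the actual default implementation), an overridden `prepare` that authenticates via a cookie before visiting the Storybook URL could look like this. The cookie name and the `AUTH_TOKEN` environment variable are hypothetical, not part of the API:

```ts
// .storybook/test-runner.ts
// Sketch of a custom prepare function; cookie details and AUTH_TOKEN are hypothetical.
import type { TestRunnerConfig } from '@storybook/test-runner';

const config: TestRunnerConfig = {
  async prepare({ page, browserContext, testRunnerConfig }) {
    const targetURL = process.env.TARGET_URL ?? 'http://127.0.0.1:6006';

    // example: set a (hypothetical) auth cookie before any story is visited
    await browserContext.addCookies([
      { name: 'auth-token', value: process.env.AUTH_TOKEN ?? '', url: targetURL },
    ]);

    // then visit the Storybook preview, mirroring what the default prepare does
    await page.goto(`${targetURL}/iframe.html`, { waitUntil: 'load' });
  },
};
export default config;
```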
The test-runner makes a few `fetch` calls to check the status of a Storybook instance and to get the index of the Storybook's stories. Additionally, it visits a page using Playwright. In all of these scenarios, depending on where your Storybook is hosted, you might need to set some HTTP headers. For example, if your Storybook is hosted behind basic authentication, you might need to set the `Authorization` header. You can do so by passing a `getHttpHeaders` function to your test-runner config. That function receives the `url` of the fetch calls and page visits, and should return an object with the headers to be set.
```ts
// .storybook/test-runner.ts
import type { TestRunnerConfig } from '@storybook/test-runner';

const config: TestRunnerConfig = {
  getHttpHeaders: async (url) => {
    const token = url.includes('prod') ? 'XYZ' : 'ABC';
    return {
      Authorization: `Bearer ${token}`,
    };
  },
};
export default config;
```
The `tags` property contains three options (`include`, `exclude`, and `skip`), each accepting an array of strings:
```ts
// .storybook/test-runner.ts
import type { TestRunnerConfig } from '@storybook/test-runner';

const config: TestRunnerConfig = {
  tags: {
    include: [], // string array, e.g. ['test', 'design'] - by default, the value will be ['test']
    exclude: [], // string array, e.g. ['design', 'docs-only']
    skip: [], // string array, e.g. ['design']
  },
};
export default config;
```
`tags` are used for filtering your tests. Learn more here.
When tests fail and there were browser logs during the rendering of a story, the test-runner provides the logs alongside the error message. The `logLevel` property defines what kind of logs should be displayed:
- `info` (default): Shows console logs, warnings, and errors.
- `warn`: Shows only warnings and errors.
- `error`: Displays only error messages.
- `verbose`: Includes all console outputs, including debug information and stack traces.
- `none`: Suppresses all log output.

```ts
// .storybook/test-runner.ts
import type { TestRunnerConfig } from '@storybook/test-runner';

const config: TestRunnerConfig = {
  logLevel: 'verbose',
};
export default config;
```
The `errorMessageFormatter` property defines a function that pre-formats the error messages before they get reported in the CLI:
```ts
// .storybook/test-runner.ts
import type { TestRunnerConfig } from '@storybook/test-runner';

const config: TestRunnerConfig = {
  errorMessageFormatter: (message) => {
    // manipulate the error message as you like
    return message;
  },
};
export default config;
```
For more specific use cases, the test runner provides utility functions that could be useful to you.
While running tests using the hooks, you might want to get information from a story, such as the parameters passed to it, or its args. The test runner provides a `getStoryContext` utility function that fetches the story context for the current story:
Suppose your story looks like this:
```ts
// ./Button.stories.ts

export const Primary = {
  parameters: {
    theme: 'dark',
  },
};
```
You can access its context in a test hook like so:
```ts
// .storybook/test-runner.ts
import { TestRunnerConfig, getStoryContext } from '@storybook/test-runner';

const config: TestRunnerConfig = {
  async postVisit(page, context) {
    // Get entire context of a story, including parameters, args, argTypes, etc.
    const storyContext = await getStoryContext(page, context);
    if (storyContext.parameters.theme === 'dark') {
      // do something
    } else {
      // do something else
    }
  },
};
export default config;
```
It's useful for skipping or enhancing use cases like image snapshot testing, accessibility testing and more.
The `waitForPageReady` utility is useful when you're executing image snapshot testing with the test-runner. It encapsulates a few assertions to make sure the browser has finished downloading assets.
```ts
// .storybook/test-runner.ts
import { TestRunnerConfig, waitForPageReady } from '@storybook/test-runner';

const config: TestRunnerConfig = {
  async postVisit(page, context) {
    // use the test-runner utility to wait for fonts to load, etc.
    await waitForPageReady(page);

    // by now, we know that the page is fully loaded
  },
};
export default config;
```
The test-runner adds a `StorybookTestRunner` entry to the browser's user agent. You can use it to determine whether a story is rendering in the context of the test runner. This might be useful if you want to disable certain features in your stories when running in the test runner, though it's likely an edge case.
```jsx
// At the render level, useful for dynamically rendering something based on the test-runner
export const MyStory = {
  render: () => {
    const isTestRunner = window.navigator.userAgent.match(/StorybookTestRunner/);
    return (
      <div>
        <p>Is this story running in the test runner?</p>
        <p>{isTestRunner ? 'Yes' : 'No'}</p>
      </div>
    );
  },
};
```
Given that this check is happening in the browser, it is only applicable in the following scenarios:
Below you will find recipes that use both the hooks and the utility functions to achieve different things with the test-runner.
You can use Playwright's page viewport utility to programmatically change the viewport size of your tests. If you use `@storybook/addon-viewport`, you can reuse its parameters and make sure that the tests match in configuration.
```ts
// .storybook/test-runner.ts
import { TestRunnerConfig, getStoryContext } from '@storybook/test-runner';
import { MINIMAL_VIEWPORTS } from '@storybook/addon-viewport';

const DEFAULT_VIEWPORT_SIZE = { width: 1280, height: 720 };

const config: TestRunnerConfig = {
  async preVisit(page, story) {
    const context = await getStoryContext(page, story);
    const viewportName = context.parameters?.viewport?.defaultViewport;
    const viewportParameter = MINIMAL_VIEWPORTS[viewportName];

    if (viewportParameter) {
      const viewportSize = Object.entries(viewportParameter.styles).reduce(
        (acc, [screen, size]) => ({
          ...acc,
          // make sure your viewport config in Storybook only uses numbers, not percentages
          [screen]: parseInt(size),
        }),
        {}
      );

      page.setViewportSize(viewportSize);
    } else {
      page.setViewportSize(DEFAULT_VIEWPORT_SIZE);
    }
  },
};
export default config;
```
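The `Object.entries` reduce in the hook above can be isolated into a small helper for clarity (`toViewportSize` is an illustrative name, not part of the test-runner API):

```typescript
// Convert a Storybook viewport `styles` object (e.g. { width: '360px', height: '640px' })
// into the numeric { width, height } shape that Playwright's page.setViewportSize expects.
function toViewportSize(styles: Record<string, string>): Record<string, number> {
  return Object.entries(styles).reduce(
    // parseInt drops the 'px' suffix; percentage values would produce unusable numbers
    (acc, [screen, size]) => ({ ...acc, [screen]: parseInt(size, 10) }),
    {} as Record<string, number>
  );
}

console.log(toViewportSize({ width: '360px', height: '640px' })); // { width: 360, height: 640 }
```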
In Storybook 9, the accessibility addon has been enhanced with automated reporting capabilities, and the test-runner supports it out of the box. If you have `@storybook/addon-a11y` installed and enable it via parameters, you will get a11y checks for every story:
```ts
// .storybook/preview.ts

const preview = {
  parameters: {
    a11y: {
      // 'error' will cause a11y violations to fail tests
      test: 'error', // or 'todo' or 'off'
    },
  },
};

export default preview;
```
If you had a11y tests set up previously for Storybook 8 (with the recipe below), you can uninstall `axe-playwright` and remove all the code from the test-runner hooks, as they are no longer necessary.
> [!TIP]
> If you upgrade to Storybook 9, there is out of the box support for a11y tests and you don't have to follow a recipe like this.
You can install `axe-playwright` and use it in tandem with the test-runner to test the accessibility of your components. If you use `@storybook/addon-a11y`, you can reuse its parameters and make sure that the tests match in configuration, both in the accessibility addon panel and the test-runner.
```ts
// .storybook/test-runner.ts
import { TestRunnerConfig, getStoryContext } from '@storybook/test-runner';
import { injectAxe, checkA11y, configureAxe } from 'axe-playwright';

const config: TestRunnerConfig = {
  async preVisit(page, context) {
    // Inject Axe utilities in the page before the story renders
    await injectAxe(page);
  },
  async postVisit(page, context) {
    // Get entire context of a story, including parameters, args, argTypes, etc.
    const storyContext = await getStoryContext(page, context);

    // Do not test a11y for stories that disable a11y
    if (storyContext.parameters?.a11y?.disable) {
      return;
    }

    // Apply story-level a11y rules
    await configureAxe(page, {
      rules: storyContext.parameters?.a11y?.config?.rules,
    });

    // in Storybook 6.x, the selector is #root
    await checkA11y(page, '#storybook-root', {
      detailedReport: true,
      detailedReportOptions: {
        html: true,
      },
      // pass axe options defined in @storybook/addon-a11y
      axeOptions: storyContext.parameters?.a11y?.options,
    });
  },
};
export default config;
```
You can use Playwright's built-in APIs for DOM snapshot testing:
```ts
// .storybook/test-runner.ts
import type { TestRunnerConfig } from '@storybook/test-runner';

const config: TestRunnerConfig = {
  async postVisit(page, context) {
    // the #storybook-root element wraps the story. In Storybook 6.x, the selector is #root
    const elementHandler = await page.$('#storybook-root');
    const innerHTML = await elementHandler.innerHTML();
    expect(innerHTML).toMatchSnapshot();
  },
};
export default config;
```
When running with `--stories-json`, tests get generated in a temporary folder and snapshots get stored alongside them. You will need to `--eject` and configure a custom `snapshotResolver` to store them elsewhere, e.g. in your working directory:
```js
// ./test-runner-jest.config.js
const { getJestConfig } = require('@storybook/test-runner');

const testRunnerConfig = getJestConfig();

/**
 * @type {import('@jest/types').Config.InitialOptions}
 */
module.exports = {
  // The default Jest configuration comes from @storybook/test-runner
  ...testRunnerConfig,
  snapshotResolver: './snapshot-resolver.js',
};
```
```js
// ./snapshot-resolver.js
const path = require('path');

// 👉 process.env.TEST_ROOT will only be available in --index-json or --stories-json mode.
// if you run this code without these flags, you will have to override the test root, or it will break.
// e.g. process.env.TEST_ROOT = process.cwd()

module.exports = {
  resolveSnapshotPath: (testPath, snapshotExtension) =>
    path.join(process.cwd(), '__snapshots__', path.basename(testPath) + snapshotExtension),
  resolveTestPath: (snapshotFilePath, snapshotExtension) =>
    path.join(process.env.TEST_ROOT, path.basename(snapshotFilePath, snapshotExtension)),
  testPathForConsistencyCheck: path.join(process.env.TEST_ROOT, 'example.test.js'),
};
```
Here's a slightly different recipe for image snapshot testing:
```ts
// .storybook/test-runner.ts
import { TestRunnerConfig, waitForPageReady } from '@storybook/test-runner';
import { toMatchImageSnapshot } from 'jest-image-snapshot';

const customSnapshotsDir = `${process.cwd()}/__snapshots__`;

const config: TestRunnerConfig = {
  setup() {
    expect.extend({ toMatchImageSnapshot });
  },
  async postVisit(page, context) {
    // use the test-runner utility to wait for fonts to load, etc.
    await waitForPageReady(page);

    // If you want to take a screenshot in multiple browsers, use
    // page.context().browser().browserType().name() to get the browser name to prefix the file name
    const image = await page.screenshot();
    expect(image).toMatchImageSnapshot({
      customSnapshotsDir,
      customSnapshotIdentifier: context.id,
    });
  },
};
export default config;
```
The Storybook test-runner relies on a library called jest-playwright-preset, which does not seem to support PnP. As a result, the test-runner won't work out of the box with PnP, and you might see the following error:
```
PlaywrightError: jest-playwright-preset: Cannot find playwright package to use chromium
```
If that is the case, there are two potential solutions:

1. Install `playwright` as a direct dependency. You might need to run `yarn playwright install` after that, so that Playwright's browser binaries get installed.
2. Switch your package manager's linker mode to `node-modules`.

The test-runner is web based and therefore won't work with `@storybook/react-native` directly. However, if you use the React Native Web Storybook Addon, you can run the test-runner against the web-based Storybook generated with that addon. In that case, things work the same way.
By default, the test runner truncates error output at 1000 characters, and you can check the full output directly in Storybook, in the browser. If you do want to change that limit, however, you can do so by setting the `DEBUG_PRINT_LIMIT` environment variable to a number of your choosing, for example `DEBUG_PRINT_LIMIT=5000 yarn test-storybook`.
If your tests are timing out with `Timeout - Async callback was not invoked within the 15000 ms timeout specified by jest.setTimeout`, it might be that Playwright couldn't handle testing the number of stories you have in your project. Perhaps you have a large number of stories, or your CI has a very low RAM configuration. Either way, you can fix it by limiting the number of workers that run in parallel, passing the `--maxWorkers` option to your command:
```json
{
  "test-storybook:ci": "concurrently -k -s first -n \"SB,TEST\" -c \"magenta,blue\" \"yarn build-storybook --quiet && npx http-server storybook-static --port 6006 --silent\" \"wait-on tcp:6006 && yarn test-storybook --maxWorkers=2\""
}
```
Another option is to increase the test timeout by passing the `--testTimeout` option to your command (adding `--testTimeout=60_000` will increase the test timeout to 1 minute):
```json
"test-storybook:ci": "concurrently -k -s first -n \"SB,TEST\" -c \"magenta,blue\" \"yarn build-storybook --quiet && npx http-server storybook-static --port 6006 --silent\" \"wait-on tcp:6006 && yarn test-storybook --maxWorkers=2 --testTimeout=60_000\""
```
There is currently a bug in Jest which means tests cannot be on a separate drive from the project. To work around this you will need to set the `TEMP` environment variable to a temporary folder on the same drive as your project. Here's what that would look like on GitHub Actions:
```yaml
env:
  # Workaround for https://github.com/facebook/jest/issues/8536
  TEMP: ${{ runner.temp }}
```
As the test runner is based on Playwright, depending on your CI setup you might need to use specific Docker images or other configuration. In that case, you can refer to the Playwright CI docs for more information.
After merging test coverage reports from the test runner with reports from other tools (e.g. Jest), the end result may not be what you expected. Here's why:
The test runner uses `babel` as its coverage provider, which behaves in a certain way when evaluating code coverage. If your other reports happen to use a different coverage provider, such as `v8`, they will evaluate coverage differently. Once merged, the results will likely be wrong.
Example: in `v8`, import and export lines are counted as coverable pieces of code, however in `babel`, they are not. This impacts the coverage percentage calculation.
While the test runner does not provide `v8` as an option for coverage provider, it is recommended that you set your application's Jest config to use `coverageProvider: 'babel'` if you can, so that the reports line up as expected and get merged correctly.
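For example, in your application's own Jest configuration (a minimal sketch; your remaining Jest options stay as they are):

```javascript
// jest.config.js — the application's config, not the test-runner's
const config = {
  // align with the test runner, which uses babel-based (Istanbul) coverage
  coverageProvider: 'babel',
};

module.exports = config;
```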
For more context, here's some explanation of why `v8` is not a 1:1 replacement for Babel/Istanbul coverage.
Future plans involve adding support for more features.
We welcome contributions to the test runner! Please see our Contributing Guide for details on how to get started, our development workflow, and release process.