Gathering detailed insights and metrics for nightwatch-axe-verbose
npm install nightwatch-axe-verbose
6 stars · 88 commits · 8 forks · 2 watching · 4 branches · 7 contributors · Updated on 29 Jun 2024 · JavaScript (100%)
Total downloads (cumulative)

Period        Downloads    Change vs. previous period
Last day      22,158       +20.9%
Last week     103,531      +31.8%
Last month    405,470      +17.2%
Last year     4,086,946    +34.7%
Verbose error reporting for axe accessibility rule violations to use in NightwatchJS
This fork of nightwatch-axe is more verbose: it reports each passing rule and how many elements the rule was run against, and it counts each rule failure individually against each failing element so downstream failures are not hidden.
Nightwatch.js custom commands for aXe allowing Nightwatch to be used as an automated accessibility testing tool.
NOTE: If you are using Nightwatch 2.3.6 or greater, this package is already pre-included as a default plugin in the Nightwatch installation, so you can skip this install section.
npm install nightwatch-axe-verbose --save-dev
In nightwatch.conf.js, add nightwatch-axe-verbose to the plugins property array.
{
  // See https://nightwatchjs.org/guide/extending-nightwatch/adding-plugins.html
  plugins: ['nightwatch-axe-verbose']
}
If you are using an older Nightwatch version or prefer the prior custom_commands_path option, that still works instead, but the path has changed from src/commands to nightwatch/commands as shown below. Starting with v2 of nightwatch-axe-verbose you must either update the path or use the plugin pattern above.
"custom_commands_path": ["./node_modules/nightwatch-axe-verbose/nightwatch/commands"]
axeInject(): injects the axe-core JS library into your test page.
axeRun(): analyzes the current page against the applied axe rules.
axeRun takes as its first parameter the selector of the element you want to run the axe tests against. If you run it on a larger containing element such as the body, all the inner elements will be scanned.
module.exports = {
  '@tags': ['accessibility'],
  'ensure site is accessible': function (browser) {
    browser
      .url('https://dequeuniversity.com/demo/mars/')
      .axeInject()
      .axeRun('body', {
        rules: { 'color-contrast': { enabled: false } }
      })
      .end();
  }
};
Passes
√ Passed [ok]: aXe rule: aria-hidden-body (1 elements checked)
√ Passed [ok]: aXe rule: color-contrast (62 elements checked)
√ Passed [ok]: aXe rule: duplicate-id-aria (1 elements checked)
Failures
× Failed [fail]: (aXe rule: button-name - Buttons must have discernible text
   In element: .departure-date > .ui-datepicker-trigger:nth-child(4))

× Failed [fail]: (aXe rule: color-contrast - Elements must have sufficient color contrast
   In element: a[href="mars2\.html\?a\=be_bold"] > h3)
If no parameters are supplied to .axeRun(), it defaults to the html context (scanning all elements on the page) and runs with all of axe's default rule options.
axeRun can also read the selector context and/or run options from the Nightwatch globals collection, so they don't need to be passed in during each test if you have a globally applicable, customized scanning preference. These settings are expected under axeSettings and can contain context and/or options properties, holding axe-core context and option settings respectively.
If a selector context is passed to axeRun by the test, it takes precedence over the global setting. Global option properties not supplied by the test are merged with the ones provided by the test; for same-named properties, the test-supplied value is used.
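The precedence described above can be sketched as a shallow merge in which test-supplied properties win. This is an illustrative model with a hypothetical helper name, not the plugin's actual implementation:

```javascript
// Illustrative model of combining global axeSettings options with
// test-supplied options: a shallow merge where the test wins on
// same-named top-level properties. (Hypothetical helper, for clarity only.)
function mergeAxeOptions(globalOptions = {}, testOptions = {}) {
  return { ...globalOptions, ...testOptions };
}

// Global config disables color-contrast; the test supplies its own rules.
const merged = mergeAxeOptions(
  { rules: { 'color-contrast': { enabled: false } } },
  { rules: { 'nested-interactive': { enabled: false } } }
);

// The test's `rules` value replaces the global one entirely,
// matching the "same-named properties" precedence described above.
console.log(merged); // { rules: { 'nested-interactive': { enabled: false } } }
```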
// nightwatch.conf.js
test_settings: {
  default: {
    globals: {
      axeSettings: {
        context: 'html',
      }
    }
  }
}
The above example sets these on the default global configuration. If you set them in non-default test settings instead, you can have different axeSettings per environment configuration if you prefer.
Given a global configuration whose axeSettings context excludes elements under the .ad-banner class and whose options disable the color-contrast rule, one could expect the following.
.axeRun()
➡️ Run against all page elements except those with, or nested inside, the .ad-banner class. Use all rules except color-contrast.
.axeRun('body')
➡️ Run against all page elements contained inside the body element. Use all rules except color-contrast.
.axeRun('body', {
  rules: {
    'nested-interactive': {
      enabled: false,
    },
    'select-name': {
      enabled: true,
    },
  },
})
➡️ Run against all page elements contained inside the body element. Use all rules except nested-interactive. Note that this overrides the global color-contrast setting, since both supply a value for rules and the one from the test takes precedence.
.axeRun('body', { otherValidAxeSetting: { something: true }})
➡️ Run against all page elements contained inside the body element. Run with the rules settings supplied from the global configuration (color contrast disabled), since the test did not supply a rules setting, and additionally apply the otherValidAxeSetting supplied by the test.
No vulnerabilities found.
OpenSSF Scorecard results (last scanned on 2024-11-18):

- Dangerous-Workflow: no dangerous workflow patterns detected
- Binary-Artifacts: no binaries found in the repo
- License: license file detected
- Vulnerabilities: 2 existing vulnerabilities detected
- Code-Review: Found 3/19 approved changesets -- score normalized to 1
- Maintained: 0 commit(s) and 0 issue activity found in the last 90 days -- score normalized to 0
- Pinned-Dependencies: dependency not pinned by hash detected -- score normalized to 0
- Token-Permissions: detected GitHub workflow tokens with excessive permissions
- CII-Best-Practices: no effort to earn an OpenSSF best practices badge detected
- Security-Policy: security policy file not detected
- Fuzzing: project is not fuzzed
- Branch-Protection: branch protection not enabled on development/release branches
- SAST: SAST tool is not run on all commits -- score normalized to 0
The Open Source Security Foundation is a cross-industry collaboration to improve the security of open source software (OSS). The Scorecard provides security health metrics for open source projects.