The core charter of digital accessibility testing professionals around the world is to help ensure that digital assets are accessible to all, including people with disabilities.
There are two common success metrics these professionals use:

1. WCAG conformance. Developed by the W3C, the WCAG Success Criteria exist to provide guidance to us all, helping define what an accessible experience should look like.
2. Total issues addressed. Based on 20 years of industry experience and thousands of client engagements, we believe that to make a truly immediate and sustainable long-term impact on your state of accessibility, the best method of measurement is total issues addressed, in order of severity or impact. This does not negate the need for compliance tracking in any way, but it better enables organizations to move the needle and build a user-experience-focused culture.
In order to “do the most good” without disrupting existing processes, a combination of automated and manual testing procedures has become standard practice (excluding those who unknowingly purchase overlay tools). However, how much of the testing WCAG requires can be handled by automation is still debated within the accessibility community. To help remove the stigma attached to automated testing, it is our intention to disprove the widely accepted belief that automated accessibility testing only provides 20 to 30% of accessibility testing coverage. [1][2]
This statistic is founded on an inaccurate definition that accessibility coverage is calculated by how many individual WCAG success criteria can be tested by automation. As a result, organizations new to digital accessibility are discouraged by the perceived value of automated testing, driving many of them to overlay tools or unsustainable manual efforts.
In this report, we’ll analyze and present how real audit data reveals a higher accessibility coverage for automated testing.
[1] https://accessibility.blog.gov.uk/2017/02/24/what-we-found-when-we-tested-tools-on-the-worlds-least-accessible-webpage/
[2] https://engineering.linkedin.com/blog/2020/automated-accessibility-testing/
Accessibility audit data sample
We compiled anonymized audit data from a large number of companies across various industries and geographies, spanning 13,000+ pages/page states, and nearly 300,000 issues. In an effort to provide an accurate representation of this audit data, this study concentrated on first-time audits, i.e. if a page/page state was tested multiple times during the study period, that page/page state was only counted once, and only issues from its first accessibility audit were included. This removes any unintended biases introduced by varying remediation priorities and schedules.
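As an illustration of this filter, here is a minimal sketch of keeping only each page state's first audit; the record shape and field names are hypothetical, not Deque's actual schema:

```typescript
// Hypothetical audit record; field names are illustrative only.
interface AuditRecord {
  pageStateId: string; // unique identifier for a page or page state
  auditDate: Date;
  issueCount: number;  // issues found in this audit
}

// Keep only the earliest audit per page state, so that later re-audits,
// shaped by each client's remediation priorities and schedules, don't
// bias the issue counts.
function firstAuditsOnly(records: AuditRecord[]): AuditRecord[] {
  const earliest = new Map<string, AuditRecord>();
  for (const rec of records) {
    const seen = earliest.get(rec.pageStateId);
    if (!seen || rec.auditDate.getTime() < seen.auditDate.getTime()) {
      earliest.set(rec.pageStateId, rec);
    }
  }
  return [...earliest.values()];
}
```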
13,000+ pages/page states · nearly 300,000 issues · 0 false positives · first-time audits only
The automated testing in this data set was done using the popular open-source axe-core rules library. It is important to note that axe-core puts great emphasis on not reporting “false positives,” i.e. erroneous issues that may in fact not be issues at all. The study focused on HTML pages only and spans various conformance standards, including WCAG 2.0/2.1 Level A and AA.
If you’d like to look more closely into how we mapped coverage for both our automated and Intelligent Guided Testing tools, you can find more detail in the Appendix of this paper.
In the report below, we will discuss what accessibility testing coverage is, how much of digital accessibility can be covered by automation alone, and the impact of testing accuracy.
57.38% of total issues were found during automated tests
What is accessibility testing coverage?
The state of the market today
How much coverage is provided by automated accessibility testing tools available today? Depending on who you are talking to, the answer to this question usually varies anywhere between 20 and 30 percent (again, assuming you throw out silly overlay claims). Many people in the industry today define coverage as the percentage of individual WCAG Success Criteria that can be tested using automated accessibility tools. The remaining coverage required to achieve compliance is achieved with manual testing.
Why is coverage important?
Today’s agile development practices rely on automation to achieve maximum throughput for product development teams. Digital accessibility is sometimes looked at as a non-functional requirement and is often deprioritized to meet business-critical ‘functional’ requirements. Development and QA managers need to budget and plan for resources ahead of time. They need to forecast how much work can be handled by automation, and how many manual resources will be needed to meet the product deliverables, timelines, and budget.
It is often with this intent that the question of coverage is asked. The higher the number of issues that can be caught and addressed in earlier stages of product development, the lower the overall cost. Moreover, automated tools with high ‘coverage’ reduce the reliance on specialized skills and make it possible to ‘mainstream’ the development of an accessible product.
Accessibility coverage: WCAG criteria vs. individual issues
Looking at the percentage of WCAG success criteria is certainly one way to think about ‘coverage.’ In our analysis we found automated issues for 16 out of the 50 Success Criteria under WCAG 2.1 Level AA. This supports the 20 to 30% automated coverage figure that many experts claim today. However, our analysis indicates that this definition does not accurately reflect the number of issues found when testing real web pages as they exist in the wild. In practice, some types of issues occur much more frequently than others, and because the most frequent types are largely automatable, a much higher percentage of total accessibility issues can be discovered using automated tools.
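To make the two definitions concrete, here is the arithmetic behind each, as a sketch using this report's own numbers (the issue totals come from Table 2 below):

```typescript
// Definition 1: coverage as the share of WCAG 2.1 A/AA success criteria
// that automated rules can test at all.
const criteriaCovered = 16;  // SC where our audits found automated issues
const totalCriteria = 50;    // WCAG 2.1 Level A + AA success criteria
const criteriaCoverage = criteriaCovered / totalCriteria; // 0.32 → the "20-30%" range

// Definition 2: coverage as the share of real-world issues that automated
// rules actually find (totals from Table 2 below).
const automatedIssues = 169_242;
const totalIssues = 294_958;
const issueCoverage = automatedIssues / totalIssues; // ≈ 0.5738 → 57.38%
```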
In our studies, we looked at over 2,000 audits that were conducted using Deque’s automated testing tools and manual testing methodology. In the majority of the audits, we discovered that issues found using automated tests made up a higher percentage of the total than issues found manually.
We believe that the number of issues is a much better indicator of the level of effort required to address accessibility issues. We find that, in most instances, the volume of issues impacts the remediation effort much more than the type of issue. For example, consider a web page with 10 missing field-label associations. While this maps to a single WCAG success criterion, a developer (in most cases) has to address these issues one at a time. Therefore, the effort required to address all 10, while perhaps not 10x the effort of fixing one, is certainly much higher than the effort required to fix a single missing field-label association.
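To make that counting concrete: axe-core reports one violation per rule, with a nodes array listing every element that failed it. Below is a minimal sketch, using simplified types modeled on axe-core's result shape, of counting issues per failing element rather than per criterion:

```typescript
// Simplified structural types modeled on axe-core's result shape.
interface AxeNode { target: string[] }                   // CSS selectors of one failing element
interface AxeViolation { id: string; nodes: AxeNode[] }  // one rule, many failing elements
interface AxeResults { violations: AxeViolation[] }

// Counting by node rather than by rule reflects remediation effort:
// ten unlabeled form fields are ten separate fixes, even though all ten
// map to the same WCAG success criterion.
function countIssues(results: AxeResults): number {
  return results.violations.reduce((sum, v) => sum + v.nodes.length, 0);
}
```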
Some key findings from our analysis
On average across all the audits included in the sample data, we found that 57.38% of total issues were identified using Deque’s automated tests.
The top 5 issue categories (WCAG Success Criteria) accounted for over 78% of the total issues discovered, and a majority of these issues were discovered using automated testing.
The top 7 WCAG success criteria with the highest proportion of automated issues were (see Table 2 below):
- 3.1.1 Language of Page
- 4.1.1 Parsing
- 1.4.3 Contrast (Minimum)
- 2.4.1 Bypass Blocks
- 1.1.1 Non-text Content
- 4.1.2 Name, Role, Value
- 1.3.1 Info and Relationships

It is worth noting that in the data we analyzed, these seven categories accounted for over 80% of total issues recorded, with 4.1.2 Name, Role, Value alone accounting for about 30%.
Table 2: Issue counts by WCAG Success Criterion, with manual vs. automated breakdown, sorted by total issues

| # | Success Criteria # | Success Criteria Name | Total Issues | Manual Issues | Auto Issues | Manual % | Auto % | % of ALL Issues | Cumulative % of Issues |
|---|---|---|---|---|---|---|---|---|---|
| 1 | 4.1.2 | Name, Role, Value | 88,714 | 14,981 | 73,733 | 16.89% | 83.11% | 30.08% | 30.08% |
| 2 | 1.4.3 | Contrast (Minimum) | 48,287 | 22,011 | 26,276 | 45.58% | 54.42% | 16.37% | 46.45% |
| 3 | 1.3.1 | Info and Relationships | 36,382 | 19,950 | 16,432 | 54.83% | 45.17% | 12.33% | 58.78% |
| 4 | 4.1.1 | Parsing | 34,488 | 3,351 | 31,137 | 9.72% | 90.28% | 11.69% | 70.47% |
| 5 | 1.1.1 | Non-text Content | 23,701 | 7,687 | 16,014 | 32.43% | 67.57% | 8.04% | 78.51% |
| 6 | 2.4.3 | Focus Order | 9,553 | 9,553 | 0 | 100.00% | 0.00% | 3.24% | 81.75% |
| 7 | 2.1.1 | Keyboard | 9,412 | 9,178 | 234 | 97.51% | 2.49% | 3.19% | 84.94% |
| 8 | 2.4.7 | Focus Visible | 7,312 | 7,312 | 0 | 100.00% | 0.00% | 2.48% | 87.42% |
| 9 | 1.4.11 | Non-text Contrast | 4,539 | 4,539 | 0 | 100.00% | 0.00% | 1.54% | 88.96% |
| 10 | 1.4.1 | Use of Color | 3,713 | 3,261 | 452 | 87.83% | 12.17% | 1.26% | 90.22% |
| 11 | 1.3.2 | Meaningful Sequence | 3,313 | 3,313 | 0 | 100.00% | 0.00% | 1.12% | 91.34% |
| 12 | 3.3.2 | Labels or Instructions | 2,537 | 2,019 | 518 | 79.58% | 20.42% | 0.86% | 92.20% |
| 13 | 2.4.1 | Bypass Blocks | 2,533 | 532 | 2,001 | 21.00% | 79.00% | 0.86% | 93.06% |
| 14 | 2.4.2 | Page Titled | 2,211 | 1,962 | 249 | 88.74% | 11.26% | 0.75% | 93.81% |
| 15 | 3.1.1 | Language of Page | 2,173 | 178 | 1,995 | 8.19% | 91.81% | 0.74% | 94.54% |
| 16 | — | Rest of WCAG 2.1 A/AA SC | 16,090 | 15,889 | 201 | 98.75% | 1.25% | 5.46% | 100.00% |
| | | Totals | 294,958 | 125,716 | 169,242 | 42.62% | 57.38% | 100.00% | |
How much of digital accessibility can really be automated?
Automated accessibility testing is when a rules engine, such as axe-core, scans or analyzes a web page for accessibility issues. These rules engines are built to test against accessibility standards, such as WCAG, which have predefined criteria for whether or not something is accessible. Automated testing tools can be browser extensions, like axe DevTools, or rules engines built into automated test environments.
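For instance, here is a minimal sketch of running the axe-core rules engine inside an automated test environment, using the @axe-core/playwright integration (the URL and test name are illustrative):

```typescript
import { test, expect } from '@playwright/test';
import AxeBuilder from '@axe-core/playwright';

test('home page has no automatically detectable WCAG A/AA violations', async ({ page }) => {
  await page.goto('https://example.com/'); // hypothetical page under test

  // Run the axe-core rules engine against the rendered page, limited to
  // rules that map to WCAG 2.0/2.1 Level A and AA success criteria.
  const results = await new AxeBuilder({ page })
    .withTags(['wcag2a', 'wcag2aa', 'wcag21a', 'wcag21aa'])
    .analyze();

  // axe-core reports only issues it can flag without manual verification,
  // so an empty violations array means "no automatically detectable
  // issues", not "fully accessible".
  expect(results.violations).toEqual([]);
});
```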
As previously mentioned, we analyzed 13,000+ pages/page states and nearly 300,000 issues, and found that 57.38% of issues from first-time audit customers could be found with automated testing. Each data set will have a unique coverage percentage based on the mix of issues that occur. We are confident in the accuracy of the coverage percentage from this data set, as it comes from a large sample and a wide variety of first-time customers.
The impact of testing accuracy
Not all accessibility tools are created equal
The accuracy of accessibility tools depends on close collaboration between the developers and accessibility experts who create them.
When Deque reports issues using our axe-core-powered tools, we exclude false positives. This means that any issue we cannot state with 100% certainty is in fact an issue is not reported as such. False positives waste time, erode trust, and derail progress. Additionally, if a flagged item needs manual verification, or is a best practice, it is not included in the reported issues. This exclusion, while it reduces the total number, is important to ensure that we do not inflate the coverage percentage. It also keeps us true to the initially stated intent of coverage: to provide estimation, planning, and forecasting capabilities.
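As a sketch of what this exclusion looks like in practice, the snippet below (again using simplified types modeled on axe-core's result shape) counts only definite failures, leaving out "needs review" items and best-practice rules:

```typescript
// axe-core buckets findings into `violations` (definite failures),
// `incomplete` (needs manual review), `passes`, and `inapplicable`.
interface AxeFinding { id: string; tags: string[]; nodes: { target: string[] }[] }
interface AxeResults { violations: AxeFinding[]; incomplete: AxeFinding[] }

const countNodes = (fs: AxeFinding[]) => fs.reduce((n, f) => n + f.nodes.length, 0);

// Only definite failures count toward reported issues; "needs review"
// items and best-practice rules are excluded so the count is never inflated.
function reportedIssues(results: AxeResults): number {
  const definite = results.violations.filter(v => !v.tags.includes('best-practice'));
  return countNodes(definite); // results.incomplete is deliberately ignored
}
```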
Repeat issues
Modern web pages very often include templates (like header, footer, navigation, etc.) repeated across multiple pages. Any accessibility issues present on these templates can most likely be fixed once and bring benefits to all the pages where they are included. Therefore, we account for issues on these common templates only once for our analysis.
For example, if a header had 8 issues that were repeated across 10 pages, our analysis includes only 8 issues instead of counting them as 80. While this may not be an accurate representation of the user experience on those 10 pages, it aligns more closely with the effort required to fix the issues in the header. Counting all 80 issues would actually increase the overall percentage of issues discovered by automation.
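A minimal sketch of this de-duplication, keyed on the failing rule plus the element's selector (the record shape is hypothetical):

```typescript
// A hypothetical per-page finding: which rule failed on which element.
interface Finding {
  page: string;
  ruleId: string;
  target: string; // CSS selector of the failing node, e.g. "header > nav a"
}

// Count issues that live in a shared template (same rule, same selector)
// only once across all pages, since fixing the template fixes every page.
// E.g. 8 header issues repeated on 10 pages count as 8, not 80.
function dedupeTemplateIssues(findings: Finding[]): Finding[] {
  const seen = new Set<string>();
  return findings.filter(f => {
    const key = `${f.ruleId}::${f.target}`;
    if (seen.has(key)) return false;
    seen.add(key);
    return true;
  });
}
```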
In summary
Accessibility coverage should not be generically defined by the number of WCAG Success Criteria that are covered, but by the volume of issues that can be covered in real-life examples. Our large sample, covering a wide range of first-time audits, provides an accurate estimate of how much issue coverage to expect from automated and semi-automated accessibility tools.
This new coverage percentage of 57.38% for automated testing gives dev teams and accessibility experts a more accurate depiction of the value they’ll receive from using automated tools.
If paired with an appropriate semi-automated testing approach, like the Intelligent Guided Tests offered in axe DevTools, this coverage can be increased even further.
As we all continue to make the web a better, more inclusive place, it is important to consider the role automation can have in helping us move the needle. By accurately communicating the coverage automation offers, and reconsidering how big an impact it can really make, you’ll help remove doubt for newcomers and put them on a path toward sustainable digital accessibility.
*Axe and Intelligent Guided Testing are trademarks of Deque Systems, Inc.
Appendix
Automated Accessibility Data
Table 3: Issue Counts by Success Criteria, summarized by Automated, Manual, and Total Issues
| # | Success Criteria | Automated Issues | Manual Issues | Total Issues |
|---|---|---|---|---|
| 1 | 1.1.1 Non-text Content | 16,014 | 7,687 | 23,701 |
| 2 | 1.2.1 Audio-only and Video-only (Prerecorded) | N/A | 140 | 140 |
Table 4: Percentage of issues by WCAG Success Criteria, sorted by decreasing % Automated in Category
| # | Success Criteria | % Automated in Category | % Automated of Total | % of Total Issues |
|---|---|---|---|---|
| 1 | 3.1.1 Language of Page | 91.81% | 0.68% | 0.74% |
| 2 | 4.1.1 Parsing | 90.28% | 10.56% | 11.69% |
Semi-Automated Intelligent Guided Testing Data
Table 5: Number of Issues by WCAG Success Criteria with coverage provided by IGT
| # | Success Criteria | IGT Coverage | Total Issues |
|---|---|---|---|
| 1 | 1.1.1 Non-text Content | Complete | 23,458 |
| 2 | 1.2.1 Audio-only and Video-only (Prerecorded) | Partial | 111 |
Partial coverage implies that the rules in IGT do not cover all possible scenarios for these success criteria. Depending on the page content, a percentage of the issues in the Total Issues column could have been discovered with IGT. Table 6 shows the sensitivity of total issues discovered with Partial coverage.
Table 6: Sensitivity of Total Issues Count to Percent Coverage for criteria partially covered by IGT.
Access the PDF version of this report at: deque.com/coverage-report/.
Wondering how Deque claims to achieve catching 80+% of accessibility issues by volume? Read the Semi-Automated Testing Coverage Report. (PDF)