Real-Time Remote Usability Testing with Screen Reader Users, Part 1: Practical Overview
The following post is the first in Caitlin Geier’s two-part series, “Tips and Tricks for Real-Time Remote Usability Testing with Screen Readers.” This first post (Real-Time Remote Usability Testing with Screen Reader Users, Part 1: Practical Overview) serves as an introduction to usability testing and outlines the advantages and disadvantages of testing remotely, along with tips and important points to consider. The second post (Real-Time Remote Usability Testing with Screen Reader Users, Part 2: Tips & Tricks) will cover how to perform remote usability tests, with helpful advice and how-to hacks.
If your organization is at all interested in user-centered or user experience design, then you likely invest in usability testing your application with key members of your user base. Guided usability testing can help you test your assumptions about what a user would do. For UX teams, testing with real users can also help uncover unforeseen pain points in the application and can give teams ammunition for making usability and accessibility improvements a priority.
If your application has a decent-sized user base (whether actual or expected), it’s quite likely that at least a portion of your users will have some kind of disability. As with general usability testing, testing applications with users with disabilities can help you understand what your users’ needs actually are, and how they go about achieving their goals.
User Testing at Deque
At Deque, users with disabilities make up between 15 and 20% of the expected user base for our applications. In particular, a number of people who use our applications use screen readers as their primary means of accessing the web and other applications. The user base for Deque’s software is also relatively small, very specific, and spread throughout the world. As a result, it’s generally not cost-effective to do in-person testing. Remote testing services like UserTesting.com are also not particularly useful because our user base is so specific.
As such, the vast majority of the usability testing I do is done remotely in real time. I typically use a few different types of virtual meeting software, as well as software to record sessions on my screen. Testing remotely is relatively painless with most users, but a few extra things need to be taken into account when remotely testing with screen reader users.
A Brief Introduction to Screen Readers
For those of you unfamiliar with screen readers and how they work, here’s a quick primer: a person who is blind can’t visually scan a page to see what’s on it, read its text, or locate buttons or links. Instead, they use a piece of software called a screen reader, which converts the text on the page into speech. Screen readers are generally controlled using the keyboard.
By default, a screen reader reads out all of the text on the page from top to bottom. Various keyboard controls allow screen reader users to navigate by other page elements, like headings or links. Anything that is represented as text on the page will be read out, unless it’s purposely hidden in the code.
In general, a good user experience for a screen reader user involves not only reading out the text on the page, but also providing appropriate context. Screen readers interpret standard HTML elements by not only reading out the text associated with the element, but also describing the element itself. If a screen reader user comes across a link titled “Search on Google,” the screen reader will read out “Search on Google, link.” A button that says “Submit” will be read out as “Submit, button.” Additional attributes, such as alt text for images and ARIA (Accessible Rich Internet Applications) attributes, can also be added to the code of the page to give more context for screen reader users.
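As a rough illustration, the hypothetical markup below (not taken from any real site) shows how native HTML elements and ARIA attributes supply that context. Exact announcements vary by screen reader and browser, so the comments describe typical behavior rather than guaranteed output:

```html
<!-- A native link: typically announced along the lines of "Search on Google, link" -->
<a href="https://www.google.com">Search on Google</a>

<!-- A native button: typically announced as "Submit, button" -->
<button type="submit">Submit</button>

<!-- Alt text gives an image a spoken description instead of silence or a filename -->
<img src="logo.png" alt="Company logo">

<!-- An ARIA label adds context where the visible text alone is ambiguous:
     announced as "Close dialog, button" rather than just "X, button" -->
<button type="button" aria-label="Close dialog">X</button>
```

Using native elements like `<a>` and `<button>` gives screen reader users this context for free; ARIA attributes are a supplement for cases the visible text doesn’t cover.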
An experienced screen reader user can navigate through most pages as quickly as any sighted user. However, new pages, pages with a lot of content, or badly organized pages will often take more time and effort to explore.
Considerations for Testing Remotely
As with all usability testing, there are specific advantages and disadvantages to testing remotely with screen reader users. In my case, testing remotely is pretty much a must, because the user base I test with is small and scattered enough that testing in person isn’t cost-effective. If I do test with users in person, I tend to travel to them rather than the other way around.
Larger companies with more established user research teams (think Microsoft or IBM) may have usability labs which they invite users to in order to do task-based usability tests. Some companies send their researchers to people’s homes or workplaces to interact with the user in their own environment. In general, I prefer talking to users in their own environments because it more accurately simulates what the user would normally do. Also, many users with disabilities have a harder time travelling due to limited accessible transportation options in their area, so it’s easier on them if you’re the one doing the travelling. Obviously, it’s easiest if neither party has to travel at all.
If you’re testing web-based interfaces, it’s generally a given that your target user will have their own computer and (in many countries) a decent internet connection. Doing remote testing allows users to interact with you and with your application from the comfort of their own home or office, and using their own technology. This flexibility is particularly useful when testing with users with disabilities because they have the freedom to use their own familiar software and hardware configurations to access your application.
There are several different screen readers out there, and a variety of other assistive technologies that your users may have spent time adjusting to suit their needs. Attempting to recreate their environments on your own computer (or worse, forcing them to use assistive technology they aren’t familiar with) can introduce a lot of variables in testing that may skew results. Assistive technology is also often expensive, and can be a drain on your resources.
In general, testing remotely with screen reader users is more convenient and more cost-effective because there is no travelling involved. Additionally, it allows users to use their own computers and their own assistive technology in order to test your application.
Of course, there are always disadvantages to testing remotely. When testing with screen reader users, it can sometimes be helpful for you to hear what their screen reader is saying as they’re going through the site. While it is technically possible for users to project what their screen reader is saying through their microphone so you can hear it, it’s also unreliable and can make running the test more difficult for you. Many experienced screen reader users set the screen reader speed very high, to the point at which those not used to screen readers would have significant trouble understanding them. Asking users to slow down their screen readers so you can understand will force them to go through the application more slowly. Typically, if you want to hear what the screen reader is saying, it’s much easier to do so when testing in person.
As with all usability testing, it’s also typically better to be in the same room with the user you’re testing with so that you can observe not only what they do, but also their facial expressions and body language. Some of this can be captured by asking users to turn on their webcam, which most video conferencing software allows in addition to screen sharing. That said, a lot of video conferencing software isn’t completely accessible to screen reader users, and they might not be able to access that functionality. For the sake of simplicity, I don’t ask users to turn on their webcams.
Additionally, if your application isn’t available on a publicly accessible web server, you may not be able to test remotely at all. With sighted users, it’s possible to get around this drawback by using video conferencing software (like Webex or GoToMeeting) that allows you to give control of your mouse and keyboard to your user. This may technically work for screen reader users, but only if you have a screen reader both running on your computer and projecting through your microphone. In my experience, the lag makes it difficult (at best) for the user to properly interact with your site.
An alternative that can allow screen reader users to access your computer is to use remote access functionality in the screen reader itself. NVDA Remote and JAWS Tandem are add-ons or built-in options for the two most popular Windows-based screen readers that allow a user with the same screen reader as you to remotely access your computer through the screen reader, and vice versa. Both users must have the same version of the screen reader installed for it to work. NVDA and the remote add-on are free, but JAWS is very costly. There is also no similar option for Mac OS, to my knowledge. I have not tried this technique yet, but I intend to experiment with NVDA Remote soon.
Other Things to Keep in Mind
Since screen readers rely on code to tell the user what’s on the page, low- to medium-fidelity prototypes are generally much harder to test with screen reader users. If you’re working from wireframes or mockups, one technique you can try is to describe each screen to the user and ask them what they think, and where they would go next. This can be useful because it gives you the opportunity to ask screen reader users what they would expect a button to say, or how they expect the page to be organized.
If your mockups contain a lot of content, though, it may not be worth it. Screen reader users will also not be able to go through any of the content in the wireframes or mockups at their own pace or in the order that they wish. They will be dependent upon you to convey the most significant details to them in a reasonable order.
Stay tuned next week when I’ll go over practical tips for remote usability testing with screen readers, in part 2 of this series.
Caitlin Geier is a UX designer on Deque’s product team. As a UX designer, Caitlin’s work with accessible design flourished once she began working for Deque. She is passionate about understanding the users she’s designing for, and she continually strives to incorporate accessibility elements into her work in order to ensure that all users can benefit from inclusive design.