- We are really excited to be talking about removing bias with Wizard of Oz Screen Reader Usability Testing. My name is Annabel Weiner. I'm an Inclusive Design Specialist at Ally and my pronouns are She/Her. And I'm presenting with Courtney and Tim. Courtney, if you wanna introduce yourself. - Thanks Annabel, I'm Courtney Benjamin and I am a Testing Process Analyst at Ally on the Accessibility Team. I also use She/Her pronouns. And Tim, if you wanna introduce yourself. - Sure, my name is Tim Harshbarger. I'm an Accessibility Consultant here at Deque. And also, I guess, since it's pertinent to this particular discussion, I am totally blind. - Awesome. Thanks, Tim. So to get started, the problem that we were looking to solve is: how can we test screen reader usability before a site is developed? In the past we felt like we needed to have an HTML site to test with screen reader users, because screen readers read HTML. But if we waited until that stage, we were leaving screen reader users out of giving their feedback earlier in the process, during that early design phase. And it's a lot harder to change things once they're in code and live, because it's more expensive and it takes more time. Often, if we wait until the end of the development process to make these changes, there are fewer solutions available to us, but there are lots of options for solutions at the beginning of a project, when you haven't spent time designing and building it yet. So we wanted to find a way to get feedback from people who are blind while we were still in that early design phase. Something that's really important to our team is that accessibility conformance does not guarantee a good user experience. So we wanna make sure that in addition to passing the Web Content Accessibility Guidelines, we were reaching past compliance and giving a really great user experience to everyone who uses our products. And to do this we need to make sure that we're testing our designs with people with disabilities. So we wanted to try doing a Wizard of Oz test, but adapt it for screen reader users. And what is a Wizard of Oz test? This is a definition from AnswerLab.com, and it says, "Wizard of Oz is a method where participants interact with a system that they believe to be autonomous, but in reality it is controlled by an unseen human operator in the next room. It's a fantastic way to explore the experience of a complex, responsive system before committing resources and development time to actually build that system." So it's a method that we've used at Ally for flows like voice assistants and chats, but I got the inspiration for this idea from a talk I heard at Axe-con last year by Christine Hemphill and Tom Pokinko, where they mentioned conducting Wizard of Oz studies as early on as when you have a paper prototype. And I thought this was a really great solution to some of the issues we were facing with testing with screen reader users early on, because this way we could test the information architecture, the basic content organization, and the functionality of a website, all using our voices. So we could have participants who are screen reader users voice their commands, asking them to say things like, "Read forward, read backward, go to the next heading, go to the next link." And then we could have someone on our team act as the screen reader and provide the responses that a screen reader would give, but using their voice. So I'm gonna play an audio clip from one of these studies.
You're gonna hear two different voices. The first voice you'll hear is the participant, or screen reader user. And the second voice you'll hear is the wizard, our own man behind the curtain, who is acting as the screen reader and responding to feedback from the user. - Okay. Start screen reader. - Ally logo, go to snapshot, link. - Stop, stop the screen reader speaking. I would then move to next heading. - Heading level one, Snapshot. - Move to the next heading. - Heading level two, Investment accounts. - So hopefully that gives you an idea of what the test actually sounds like in practice. But taking a step back, I wanna talk about how we set the scene for participants. We wanted to give participants some background about why we were doing these tests and why there was a human acting as the screen reader. So we told them that we have a partially functioning prototype; it's not a real site that's built out yet, but something that's in its early design phase. We wanted to let them know why we were doing this and what to expect, and make sure they knew that this was not a real screen reader, that they'd be getting feedback from a person. We also gave them instructions about how to use the prototype, giving examples like "read forward," or you could say "enter" or "click this link" or "jump to the next heading, jump to the next button." If we were testing a mobile prototype, we gave them examples like "swipe forward" and "double tap to select," but let them know they can use any commands that feel comfortable to them. And then we let them know that they were gonna hear feedback from the screen reader, the human, as they navigated through. We also asked if there were any common hot keys or gestures that they used frequently so that we could be prepared to replicate these if they asked for them during the test. We also wanted participants to know that they could ask us to repeat anything as often as they'd like. So sometimes we might repeat the task for them if they forget as they start navigating through, or if they wanna hear a line from the screen reader again, they could ask us to repeat that as many times as needed. And lastly, we wanted to acknowledge that we understand this might feel a little awkward, because screen readers are really customized to people's personal preferences and settings, and unfortunately we can't replicate all of those custom settings here. Taking another step back and looking at how we built the tests: for the first set of tasks, Courtney and I worked together, where Courtney would act as the screen reader, or wizard, and I was the moderator. Courtney's the screen reader expert, so she was able to do a really great job replicating the screen reader language and anticipating what kinds of commands participants might ask for. To start off with documentation, I created a Word doc with common screen reader navigation patterns; I have a screenshot of the Word doc on this slide. We know that screen reader users often hop to different heading levels, different landmarks, different links, so we wanted to list all of these out so we could hop to them if requested. And then as we started practicing, we found it was easier for the wizard and moderator if the language was actually documented visually on the comps or wireframes, so that Courtney, or the wizard, could see the visual prototype and the screen reader annotations at the same time.
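[A minimal sketch, not part of the talk: one way a wizard's navigation doc like that Word doc could be structured as data, in TypeScript, for anyone who wants to script it. The page elements, names, and roles below are invented for illustration; the study itself used a plain Word doc and annotated comps rather than code.]

```ts
// Hypothetical cheat sheet: the annotated elements of one page, in reading
// order, so the wizard can jump on request ("go to the next heading").
type Annotation = {
  name: string; // accessible name the wizard voices
  role: "landmark" | "heading" | "link" | "button" | "checkbox" | "text";
  level?: number; // heading level, when role is "heading"
};

const page: Annotation[] = [
  { name: "Navigation", role: "landmark" },
  { name: "Snapshot", role: "heading", level: 1 },
  { name: "Go to snapshot", role: "link" },
  { name: "Investment accounts", role: "heading", level: 2 },
  { name: "Account details", role: "button" },
];

// "Move to the next heading": scan forward from the current position.
function nextHeading(current: number): Annotation | undefined {
  return page.slice(current + 1).find((a) => a.role === "heading");
}

// What the wizard would voice, e.g. "Heading level 2, Investment accounts."
const next = nextHeading(1);
if (next) console.log(`Heading level ${next.level}, ${next.name}.`);
```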
So I have a very basic desktop wireframe screenshot on screen, where I marked off accessibility annotations, such as the navigation landmark, the main landmark, and different heading levels. So we have a heading level two and a heading level one. We also marked off different roles: we have link roles, button roles. Sometimes we would mark off if we were using a radio button, and sometimes we would use arrows to document the reading order if it might be confusing. We wanted to make sure that the wizard had all of the information they needed, so they could voice the name, role, and value for each component on the page. And these accessibility annotations, which are in red capital letters, really helped guide the wizard during the test, so that they could see the visual text on the screen and also the screen reader language that they needed to read off with it. Another thing to note is that we wanted to use language that was generic enough that if someone used VoiceOver or NVDA, it wouldn't really matter which screen reader they were comfortable with. Moving on to our second set of tests. For our second round, we used an Adobe XD prototype to act as the screen reader, instead of having Courtney be the voice of the screen reader. We wanted to give a shout-out to Isaiah Wright on our team for helping us come up with this strategy. We wanted to test this out because we knew that Adobe XD could better mimic what a screen reader sounds like, because it would be that more robotic voice instead of a human voice. And the way this works is that you can add hotspots in the prototype tab in Adobe XD. So when you tap on an element, you can program a speech playback event and type in the screen reader language that you want it to read. Someone on your team still needs to control the Adobe XD prototype and click on the different hotspots as the participant tells you how to navigate (there's a rough sketch of the general idea just below). And I'm gonna play another audio clip for you of what this test sounds like. First you'll hear the participant's voice, and then you'll hear the more robotic voice, which is the Adobe XD prototype acting as the screen reader. - Next. - Available balance. Select to define $100. - Let's go to the next element. - Current balance. Select to define $100. - And next element. - Interest year to date, $2. - Next. - Annual percentage yield, 3.62%. - Next element. - Account details. Button collapsed. - And let's expand that one, please. - Account details. Button expanded. - So as you can hear, that sounded a lot more robotic, like a real screen reader, and participants really thought that this was a screen reader. And they told us they found it easier to interact with than when we used a human voice as the screen reader. However, there were some drawbacks to this method: you can't customize it and change the speed of the voice. And also, since participants thought that this was a real screen reader, they would sometimes get caught up in little inconsistencies in how the screen reader would read off certain elements. It didn't sound exactly like NVDA or exactly like VoiceOver, which was really good and valid feedback, but not always exactly what we were looking for. But we did find that participants were more forgiving of screen reader language inconsistencies when it was a human screen reader.
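[A minimal sketch, not part of the talk: Adobe XD's speech playback is configured through its prototyping UI rather than code, but a similar robotic readout could be scripted in a browser-based prototype with the standard Web Speech API. The phrases and the arrow-key trigger below are invented for illustration.]

```ts
// Hypothetical speech-playback hotspots: the wizard advances the "screen
// reader" one element at a time as the participant voices commands.
const lines = [
  "Available balance. $100.",
  "Current balance. $100.",
  "Account details. Button, collapsed.",
];

let index = 0;

// Voice the next element each time the wizard advances the prototype.
function speakNext(): void {
  if (index >= lines.length) return;
  const utterance = new SpeechSynthesisUtterance(lines[index++]);
  utterance.rate = 1.0; // unlike XD's playback, the speed is adjustable here
  window.speechSynthesis.speak(utterance);
}

document.addEventListener("keydown", (event) => {
  if (event.key === "ArrowDown") speakNext(); // wizard presses Down to advance
});
```

One side effect of scripting it this way is that the utterance rate can be changed per participant, which would address the speed drawback mentioned above, though it still wouldn't match anyone's own screen reader settings.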
Courtney's gonna talk about that a little bit more, but I'm gonna pass it off to her to talk about what it was like to actually be the wizard in these studies. - Great. Thank you, Annabel. So as Annabel mentioned, we had those two different studies. With the first one, we did use my voice as that simulated screen reader, and then for the second study, we evolved to using that Adobe XD prototype as the voice. As some of you may know, screen readers convert the HTML code from a website into audio output so that you can hear everything on the page. So in becoming the screen reader myself, I tried to become the screen reader that I listen to on a daily basis in my testing, which meant taking on a really robotic persona and mindset, responding in really robotic and consistent ways, exactly how a screen reader would, and paying close attention to my volume level, clarity, and consistency. We did notice that a few participants preferred a "Say all" command, which reads the entirety of a page continuously until you pause it or move on. So when we discovered this, we went ahead and recorded my voice reading off the entire first page, and then we were able to press play and have that read out, to kind of save my voice a bit. Thanks Annabel, we'll go to the next slide here, which is unplanned scenarios. Of course, when you try a new research study for the first time, you're gonna have different surprises or unplanned situations arise. The first one was unexpected routes taken. There's a happy path, of course, which is that default scenario, the most likely positive-outcome route that a user would take without any errors. However, that's not the only path that folks are going to take. There's no one-size-fits-all for accomplishing a task, and so when you give participants an end goal, they're gonna take several different paths in order to accomplish that goal. So as the wizard, or the screen reader, you really have to be on your toes and be prepared for the user to go in any direction in order to accomplish that task. We also saw some very specific key commands requested. In that case, I would do my best to respond to them. If it was something that we were unfamiliar with or something that I simply couldn't replicate under the conditions, we would go ahead and just pause and ask for clarification as to what they intended that operation or that key command to do, and see if we were able to simulate it for them from there. And then for our next slide here, some other challenges or surprises that we faced. There are different hot key possibilities between the different screen readers, so we had to pay attention to all of those. There are also differing expectations of screen reader behavior. Some folks might have programmed their screen readers to read at a certain rate, a certain pitch, maybe even a certain accent. Those were things we couldn't quite replicate, but we did our best. Then there was listening for the difference between participants' key commands and their dialogue about their thought process on how they thought the experience was going. As the wizard, I needed to pay close attention to what was a key command, meaning: was a user requesting me to go to that next heading level, or were they just having a dialogue with Annabel, the moderator, about what they thought should be the next heading level? So I was paying really close attention to that. And then participants weren't always as comfortable voicing those screen reader commands. When that happened, we would simply walk them through it and give examples of what a potential command might be. For example, "Skip to the next heading level" or "Move to the next interactive element."
So we would just give them suggestions and kind of walk them through it. And we found that as soon as we did that, they were able to move through and try out our process. Moving on to the next slide here, which is being a participant. Tim was actually a participant in our study, so he is going to touch on what it was like for him. Thanks, Tim. - Sure. Well, here are some things to be mindful of as a participant, or to make sure that your participants are aware of. The first thing is to listen carefully. As Annabel and Courtney already mentioned, this is not going to mimic the screen reader experience in exact detail, right? Some of the people who participated noticed it didn't say the same exact thing that NVDA might say, or JAWS, or whatever screen reader they're using. So it's important to listen carefully, because often the information is still being conveyed to the user, just in a slightly different way. So it's important to be attentive and listen. These next two are actually the things that really, really make this particular approach useful. The first is to think aloud. Mention what you're doing: "Okay, I think I'm gonna move through the page by heading to find this section I'm looking for." Talk about what you're doing, what you're hoping to find, and why you're thinking that way. And in particular, whenever something doesn't seem to work out, say you ask to click the button and the result you get is not what you anticipated, mention that as you're talking: "Okay, I was expecting this to happen, or this to be said." Those types of things are some of the real goals you get out of this kind of process, because then the researchers are getting a much clearer idea of not only whether something went right or wrong, but why it might have gone well or poorly. And of course, the last one: if you're a participant, or if you're working with participants, make sure the participants know they can ask for those instructions over and over again. One of the things that I know I run into with this process is that you get so focused on thinking aloud, walking through the page, keeping track of what you've heard so far, what's coming up next, and what you're looking for, that sometimes you'll forget the details of the task, and so you'll need a reminder. You just wanna make sure participants are comfortable and know that they can ask as many times as they want and have you repeat those instructions. I know I sometimes will ask again before I say I'm done, because it's like, okay, I know I'm at the right spot, but I can't remember if I needed to have a specific number at this point; I've forgotten, because I got focused on something else along the way. So that's always something to make sure that the participant knows. Thanks, I guess we'll go on to the next slide. - Great. Thank you so much, Tim. And on to some broader challenges that we faced in this study. We did notice that participants tended to get caught up on the technicality of the screen reader language used, especially when we used the Adobe XD prototype, because it sounded so similar to a real screen reader.
While we got a lot of great feedback on the actual flows we were testing, we also received a lot of feedback on the functionality of the screen reader we were simulating. An example of that would be if a dollar amount was read out and perhaps the decimal point itself wasn't read out through the Adobe XD prototype, then participants were calling that out. We did find that they were a bit more forgiving when it was my human voice, as opposed to the Adobe XD prototype. And then the next challenge that we faced, we'll just move to the next slide, thanks, is the time to create and practice for the test. It does take a good bit of time to build out that prototype and to annotate it, and then, when you're using the Adobe XD prototype, to set up those hotspots for navigating. And of course we did have multiple practice sessions, for both the human voice and the Adobe XD prototype research study, to make sure that we had accounted for several different scenarios of possible routes people would take. Another challenge that we came across was some screen reader limitations when it came to very specific key commands that we just weren't able to replicate. An example would be a very specific table mode; if a participant wanted to go into this, we would simply pause, ask for clarification, and ask what they would expect to happen. Now, this didn't come up very often, but it is just something to note as a limitation. And then another challenge was that we aren't able to really replicate those very custom screen reader settings, like setting up the "Say all" command. We did ask participants ahead of time about the "Say all" command, which is reading out everything on the page, starting at the beginning, so we overcame that one, but there are some other custom settings, like the pitch, rate, and again the accent of the screen reader, that participants like to set up themselves. And so it might sound a bit different when we are simulating it for them. We have a quote from a participant from the study that I'll read off here. It says, "People who use screen readers are very used to their screen reader and the certain voice and rate. And so Courtney did a great job. It's just a little interesting. It kind of threw me off my game just listening to someone else read it for me, because I have all of my settings adjusted and I can go faster or slower." So that was just an interesting little takeaway there. Thanks Annabel, and then we're gonna move on to some more general takeaways from this study. Prepare for the non-happy paths; we would definitely recommend that if you ever want to replicate the study. Of course you're gonna prepare for the happy paths, nice and easy, the routes where no errors or tricky situations pop up, but we'd also recommend preparing for the non-happy paths, because like we mentioned, users can take a wide variety of routes in order to accomplish the task, however they see fit. You don't have to build everything out, of course, because it is a prototype, not the final version. You can have a conversation with your team about what would need to be built out, and then let participants go through the flow. If something wasn't functional, if they just weren't able to interact with it, no problem, go ahead and open up a discussion: "This button isn't currently operational, but what would you expect to happen if you clicked on it?"
And you get awesome feedback that way. Another general takeaway is that the majority of participants really did pick up the concept fairly easily. They were able to transfer from their everyday screen reader over to my voice or that Adobe XD prototype. Sometimes, with the first task, participants wanted to go a little bit slower as they figured it out, but the majority of participants picked it up really quickly and were able to use it seamlessly. And we did get feedback that participants really appreciated being a part of this study and felt that their feedback was really valued. Another general takeaway was that it was really helpful for us to have a separate wizard and moderator, so that each could give focused attention: the wizard on responding to the key commands, and the moderator on gathering detailed feedback from the participants, documenting it, and asking clarifying questions. For our research study, for our purposes, it worked out really well that Annabel has a UX research background, so she was able to have those dialogues in a really meaningful way. And then myself, I have experience with screen readers and I am certified as an NVDA expert as well. So I'm going to pass it over to Annabel here, who is gonna touch on a few test-specific takeaways. - Thanks Courtney. So I wanted to give you an idea of a few of the valuable findings from these tests. And while we found these things while testing with screen reader users, often the changes improve the experience for everyone, which is really great. So the first one is: when items are in the same category, they should be built with the same role. We had a test where you could select different types of accounts, and if you had already selected an account, that one was built as a checkbox, but if you wanted to add another account, the ones that were not yet selected were built as buttons that opened a drawer. And this was confusing to participants: having options that were in the same category, but that were built with different roles. In this scenario, it probably would've been better if we had built all of the accounts as checkboxes and just had some that were selected and some that were unselected. The next point is: make it clear when users need to select a "Save" or "Submit" button. We found this one really valuable. We had a page where participants were updating some settings on their account, and it was a flow where you could add things to or remove things from your account. After you added or removed anything, you got an announcement that said this was successfully added, or this was successfully removed. And since participants got this confirmation with an announcement, they thought they were done and didn't realize that there was a save button towards the end of the page. This makes a lot of sense, because once you get that announcement that something was added or removed, you would think that you were done, but if you left, your changes wouldn't have been saved. This was a pretty easy tweak for us, because we could just add some language to the announcement. We could update it to say, "This setting was added, continue to Save button," and then participants would know there's a next step: they need to go to that button to save. And as we were talking to participants, we found that this was a common frustration with sites.
Some pages require you to save, while some pages save automatically. So it's really important to make it clear to screen reader users when they need to save and when it's done automatically. The next point is: don't make users listen too long before hearing the role of a component. We had a checkbox with a really lengthy label, so participants had to listen a long time before hearing that it was a checkbox, which means they were listening to the content for a long time before understanding how they could interact with this part of the site. If we had used a shorter label, they could have heard the label, and that it was a checkbox, sooner. And if we had built the rest of the text as hint text using aria-describedby, the rest of the text would read after the label and role of the element, so they could hear how to interact with it sooner (there's a rough sketch of this pattern, and the announcement pattern above, after these tips). And the last point is: in a multi-step flow, give announcements when a task is completed. We had a multi-step flow, and we got a lot of positive feedback when there were announcements such as, "Step one, personal information, completed," or, "Step two, current step." Giving users some context about where they are in the flow, what they've completed, and what still needs to be done was really beneficial. And moving on to some tips, if you'd like to try to create a Wizard of Oz test. The first tip is: prioritize testing on pages with new patterns. Like we've mentioned, it does take some time at the beginning to set up these tests, so it's important to think about which pages would be the most beneficial to test, because you probably won't be able to do it on all new projects. We found it to be really great when you're testing a new pattern that would go into your design system. For example, if you're building a new auto-suggest pattern, testing the functionality on one project will help extend the findings to other projects that also use that pattern. Landmarks, headings, and links are really important. We found that a lot of participants used their hot keys to hop between these elements, so it's important to make sure that your wizard is ready to hop to the next heading or the next link if a participant asks for this. Ask participants about their basic screen reader preferences. We can't customize everything in this experience, but there are a couple of questions we can ask participants before we start the test to make the experience a little bit more personalized for them. Courtney mentioned this, but we found that some screen reader users prefer to have their screen reader read everything on a site continuously until they tell it to stop, while some prefer to progress through the page one element at a time, a little bit more slowly. So we can ask participants whether they want the screen reader to read everything until they tell it to stop or to read one element at a time, and be ready to do either of those. We could also ask which hot keys they use most often, and ask for clarification if we weren't sure what one of those did. It's important to be flexible. You're likely gonna go back and forth between having a conversation with participants and having them voice key commands, and this is really what you want. You wanna have that conversation with participants about what they're thinking and feeling about the prototype and the process, and sometimes you might have to help them shift gears, going back to giving those voice commands and interacting with the screen reader or the prototype.
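[A minimal sketch, not part of the talk: the announcement and aria-describedby patterns above, written as TypeScript against the DOM. The "paperless statements" setting, the wording, and the element IDs are invented for illustration; this is not Ally's actual markup.]

```ts
// Hypothetical example: a short label so the name and checkbox role are
// heard quickly, with the longer text exposed as a hint via aria-describedby.
const checkbox = document.createElement("input");
checkbox.type = "checkbox";
checkbox.id = "paperless";
checkbox.setAttribute("aria-describedby", "paperless-hint");

const label = document.createElement("label");
label.htmlFor = "paperless";
label.textContent = "Paperless statements"; // short: label + role read first

const hint = document.createElement("p");
hint.id = "paperless-hint"; // read after the label and role
hint.textContent = "Statements will be delivered online instead of by mail.";

// A polite live region, so the confirmation also points to the next step
// instead of letting users think they're done before saving.
const status = document.createElement("div");
status.setAttribute("role", "status"); // implicitly aria-live="polite"

checkbox.addEventListener("change", () => {
  const action = checkbox.checked ? "added" : "removed";
  status.textContent = `Paperless statements ${action}. Continue to the Save button.`;
});

document.body.append(checkbox, label, hint, status);
```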
Lastly, if you have time, it's really great to practice with someone who isn't familiar with the prototype or project, like maybe someone on a different testing or quality assurance team. You also wanna make sure you're practicing with someone who knows how to use a screen reader. And if you do practice with someone who is sighted, ask them to dim their screen if you're sharing anything over Zoom, so that they're getting the experience just through voice and audio the first time. So that is all that we have. Thank you so much for your time. And we wanted to open it up for questions. - There have been a lot of really great questions coming through in the Q and A section, so I'm excited to dive in. One of the top-rated questions was: how long did it take to set these tests up? And when would you prefer to use a coded prototype versus Wizard of Oz testing, or vice versa? - Yeah, so for our first test it did take some time to set up, because we were working with a design team, so we needed to wait for the comps to be ready. And then I added the annotations on top of those, and it takes some time to document the accessibility for a project. In addition, this was a new experience for Courtney and me, so we wanted to practice before we put it in front of participants so that we were prepared. So it did take some hours to set up. I don't know, Courtney, if there's anything else you wanna add to that? - Yeah, for the first one, you're spot on. It took us a bit longer to get everything nailed down, a few weeks. Then once the second one rolled around, we were much more familiar with how to organize things, and we were able to build out the prototype fairly quickly. That one was closer to maybe a week or two to set up, so the second one was a lot quicker. - And then I think the second part of the question was about using a coded prototype versus this early prototype. I think if a site is already built out in HTML and it's live, then obviously that's a great one to test with screen reader users, but this approach is really great for the beginning of a process, when you're working with designers and when your research team is conducting usability research with all participants. That's a great time to do a Wizard of Oz study like this. - Great, thank you for sharing your expertise there. Next question: how does one become a screen reader expert? There are lots of questions around what to do if we don't have this expertise on staff, and how do we train it up? - Sure, so I started using screen readers and learning how to use them just over three years ago. I use them on a daily basis. And it's really something that you can pick up: just start using one on different websites that you go to, and start learning the different shortcuts. In terms of the certification that I got, that was through NVDA, which is one of the most commonly used screen readers across the world. NVDA has a certification that you can get, and that is something that I picked up in the past year. It really did help for this study in terms of the different key commands that would come up.
- One other question that came up was whether you considered at any point building a prototype, for example, in PowerPoint, which already has screen reader support, and what your take on that sort of approach would be. - That's a good question. We did not consider building the prototype in PowerPoint. I think our UX team often uses Sketch, and so it made sense to us to leverage what they had built in there, but that would be something I'd be interested in exploring. That's a great idea. - Yeah. - Great. Some more questions, particularly around the screen reader expertise. Do you all have any suggestions, if the screen reader expertise isn't present on staff, of organizations that they might be able to reach out to, or any particular resources for getting more involved in that direction? - Sure, so in terms of gaining more screen reader expertise, other than practice, I would go directly to the screen reader websites. For myself, I use NVDA and JAWS, J-A-W-S; those are the two primary screen readers that I use on a Windows computer. There's also VoiceOver on Mac, which would be the third most popular screen reader, according to studies that have surveyed screen reader usage globally. I would go directly to those websites to get the best practice, and they will give you those sheets on what the commands are; I would just continue to practice with those. You can definitely reach out to any contacts on those websites as well, in order to gain further clarification. There are also a lot of tutorials and how-tos, in order to ramp up some knowledge there, and I believe Deque hosts a lot of those videos on their website. Thank you for the question. - So, a slightly different direction: what would you say are some of the major limitations of this method? - I think a major limitation that we've talked about is just not being able to replicate those custom settings of a screen reader, and we can't be prepared for every command. And I think maybe another limitation is just that the turnaround time isn't always as quick as the turnaround time for another usability study that someone's running. So you just need to be prepared to work with the designer and the team early on in the process, so that there's enough time to incorporate that feedback before it goes into development. - Right, and kind of going off of that, since this does take more time and possibly more resources, what you could do is take a look at all the different flows or experiences that you're currently building out and decide which ones you would prioritize for this type of testing. Maybe not all of your flows need this type of testing, but perhaps those that are highly prioritized would really benefit from it. - Great. Carrying on a little bit in that direction. I know you touched on this, but in terms of your main goals for what you were hoping to learn from this, do you feel like that was accomplished, and what were some of your main takeaways? - Yeah, that's a great question. I think our main goals were making sure that we were testing with people with disabilities and getting a really diverse range of feedback to make sure we're building really inclusive products. And I think one of the main takeaways for me is that Courtney and I are really familiar with the WCAG guidelines and inclusive design, and so we can give a lot of feedback on projects for accessibility, but we do have limitations. Our eyes do get in the way.
And so there are always gonna be nuances that we don't pick up on, because we are not blind and we are not primary screen reader users. So we just really learned about the importance of making sure we're testing with everyone. - I'll also add to this, as someone who uses a screen reader: there were surprises in the testing for me as well. The things you learn from this are about how real people use a screen reader in as real an environment as possible. Even if you're a screen reader user, you're not gonna always anticipate those kinds of issues. So again, that's part of the reason why doing this kind of study, where you have somebody really performing the actions, going through the task, is of particular benefit: you can have the heuristics and all of that, but you're not gonna always anticipate every possible issue. - Great, so a question just came through that is a really interesting one. What deliverables do you share with your developers based on what you learn from these tests? - Yeah, that's a great question. I put together a report-out deck and shared it with the design team first: what we did for the test and what our specific findings were, along with audio clips from the test and quotes from users, so they could really understand why we were giving that feedback. - Great. One other question regarding software. I know you mentioned that you utilized Adobe XD. Are there any other software tools that offer similar functionalities that you would recommend? - Yeah, Adobe XD seemed to make the most sense to use with voice. I'm curious about researching the PowerPoint functionality that someone else asked about, but I would say that was the main one that I thought of. And if you wanna do more of a human screen reader, you could use any design software, like Sketch or Figma, to write those annotations. - Yep, the Adobe XD seemed to work really well because you are able to incorporate those different hotspots and simulate clicking a button, or maybe going backwards. So it's that really interactive piece of the prototype, which I'm not sure that you would get with PowerPoint. And then, as Annabel mentioned, you could really use anything to just mock up what the screen would look like for the prototype when you're using a human voice, because the human voice allows you to go all over the page. - Something I'll mention too, right? This is more of a long-term, more strategic viewpoint. At this particular time, Courtney and Annabel had to use existing tools to try to do this newer kind of approach to prototyping. It's possible that eventually, down the road, tools could be built out that make that whole process a little bit smoother, right? Tools that help guide the process and make it a little faster to put these things together, as opposed to starting from scratch with a tool that will help with the process but may or may not be specifically designed to anticipate all the specific things you're wanting to accomplish. - A great seed to plant for all the accessibility leaders out there listening in. We got some questions around the logistics. It sounds like you conducted this via Zoom. Any takes on conducting this via Zoom versus in person, and what you see as some of the potential trade-offs or benefits there? - Sure. So we did use Zoom, primarily because we did this during COVID times.
So of course we had a hiccup or two with just kind of getting onto Zoom and getting started. If we were to conduct it in person, we would still need to set up kind of a separate wizard experience, where at least participants know that it's not the real deal, that it's a simulated screen reader. But for the purposes of this study, Zoom seemed to work really well to sync everyone up. Anything else to add there, Annabel? - Yeah, I think you covered it. One other thing: Courtney and I were sharing screens on Zoom, because we needed to see some things, but participants who were having trouble joining the Zoom could really just call in with their phone, so it's not important that they join through their desktop. That's just something to know if you do it over Zoom. - And then we probably have time for about one or two more questions, so I'm gonna throw this one in. Talking about fidelity of design, how would you approach using this for different fidelity levels, for example, wireframes versus more high-fidelity mockups? - I actually think it's good for all phases. I think doing this when you have low fidelity is almost maybe easier, because there's not much that's been built out yet and you probably just have some basic content. So I would say that this process could be applied for low-fidelity and high-fidelity. - And then the last question is just, we received lots of questions about getting participants. So the main question is: what tips would you provide to people who are trying to recruit participants? - Yeah, recruiting is tricky. That is something that we had trouble with, but we were able to kind of lean on our community partnerships and also our relationship with Deque for recruiting. - Yeah, and we would recommend, to the best of your ability, trying to recruit users of varying experience levels with NVDA or screen readers in general for this type of study. So perhaps those who are new to using one, and those who are more familiar, possibly expert-level screen reader users who use it on a day-to-day basis, just so you're getting a variety. We did end up with a variety of screen reader users in our study, and we had a handful of really technical, well-practiced screen reader users. - I definitely would second that. Getting a variety is good to have, because that gives you a much better understanding of how well the interface is gonna work across that variety, because that's gonna be the reality: not all screen reader users are expert users. There is definitely a broad range of abilities amongst screen reader users. - It looks like we do have one minute left, so I'm gonna throw in a final one. How will you innovate on this approach for your next round of tests? - Yeah, that's a great question. I'm interested in going back to having Courtney, or someone, be the human voice of a screen reader, just because I think we could test projects faster that way, and also because people understand that this is more low-fidelity, so we might get more feedback on the actual flow and process. - With that, we do have to wrap up, but a tremendous thank you to all of our presenters. Tim, Annabel, Courtney, thank you so much for all you were able to share with us today, and thank you to all participants for sharing questions and driving the conversation. - Thank you so much. - Thanks.