
Usability Script and Moderator’s Guide: 2 Tools or 1? Communicating with Study Respondents

One of the tools most moderators use during usability tests is a usability script or moderator’s guide. The script is an essential aid to communication. For the moderator, it provides a way to impart key information. It also helps to ensure that each participant receives the same instructions, thus maintaining consistency throughout the study.

The script serves as a guide to the usability session; it directs the participant to conduct certain activities, prompts with questions, and generally shapes the conversation between moderator and participant.

As well as guiding the session, a good usability script contains information intended to put participants at ease; testing should put the product, not the participants, through the wringer, and a usability session should be a comfortable, painless exercise. The sheer frequency of conducting sessions may cause moderators to become lax about imparting such reassurances: that the test focuses on identifying problems with the product, not on a participant’s skills, and what the structure, conduct, and duration of the session will be. A good script reminds the moderator to provide this comforting information and suggests, or prescribes, appropriate wording for doing so. It’s a good idea for moderators to stick closely to the scripted text in these circumstances.

There are occasions when a script in its entirety should be rigidly followed; for example, if a truly rigorous comparison of outcomes is to be conducted. Relative newcomers to the usability moderator’s role may also find that sticking tightly to the script provides them with a degree of comfort.

Going Off-Script

Most experienced moderators will know that it can be appropriate to deviate, sometimes extensively, from the script. The ability to explore unanticipated events or pathways exposed during a usability session can sometimes be more important than rigid adherence to a predefined course.

I often find that it doesn’t occur to me to ask certain questions until I see how users interact with the test site. Since each individual has a unique perspective, each session can represent an opportunity for unexpected findings. While unexpected events can be unsettling at times (particularly for those less familiar with moderating), they also represent a rare opportunity, and the experienced moderator will welcome them.

Because usability tests are unpredictable, some questions cannot be crafted in advance. This is even truer when conducting contextual inquiries, which are often formative, rather than evaluative. For example, as part of a study completed in 2010 for a government organization, the Mediabarn User Experience Lab interviewed employees who regularly used two software tools to help accomplish their daily work. Since users had been using these tools for close to a decade, it was logical to have them demonstrate for us how they accomplished a set of tasks with the tools, as opposed to the moderator dictating the order in which the tasks were to be completed. During the contextual inquiry, it was important for us as moderators to be able to “let go,” so that users could guide us through the tools, all the while uncovering their insights and workarounds. Conducting the sessions in this largely free-form way gave our client a more well-rounded understanding of how employees were actually using the applications.

When a Script is Not Possible

While researchers may choose to be flexible with their scripts, there are times when they simply don’t have the option to work in any other way. As part of a mixed-method test Mediabarn completed in 2009 for a GPS-related study, we conducted a series of ride-along usability tests, traveling with users and exploring how they got around using maps and directions. We needed to understand not only where usability issues arose while using the website, but also what happened once directions were received and had to be put to use on the road. The ride-along tests were the third portion of the overall study. (The others were focus groups and interviews.)

While we had been able to use fairly standard scripts to conduct the focus groups and usability tests, there came a point when it became apparent that a traditional script would not work while in respondents’ vehicles. However, we still needed to have some structure to the test sessions.

We began each session in a lab setting, where respondents received instructions and information about the study. They were able to receive their route directions much as they would normally, prior to getting into their vehicles. After a few minutes of preparation time in the lab, we headed outside.

We literally created a mobile usability lab within respondents’ vehicles and had them follow the directions they had printed from the website or could view on their iPhones via an app. Given the potential safety hazards inherent in this type of study, the sessions were purely observational, so as not to add any further distractions.

Sessions were, by their nature, user-directed. To offset this, we used an outline of expected outcomes plus a list of follow-on questions for each stop along our route. Most of these questions focused on how easy or difficult respondents found it to follow the directions, and there were some specific probes on perceptions of accuracy and efficiency. However, we could not always predict how respondents would interpret and process the information at hand, and there were definitely some occasions in which we went off-course. For example, some respondents missed a turn along their route because they had difficulty interpreting the distance associated with a step in the directions. They didn’t know if the distance listed there referred to how far they had already traveled or how far they needed to go before the next step. These “mistakes” provided jewels of information. By simply sitting in the backseat and allowing users to guide us through the study, we uncovered answers to questions that had been puzzling the developers, and gathered great insights that led to actionable recommendations.

The beauty of this study was the ability to remain flexible, which is often a requirement when working outside a controlled lab setting.

Balancing Rigidity and Flexibility

Each respondent is unique; some require quite a lot of hand-holding, while others can be let loose earlier in the game. Depending upon their background, some respondents need the moderator to walk them step-by-step through the process of usability testing, while others prefer to take the reins. While it’s important that each respondent completes the tasks associated with the test, my experience has shown that it’s not necessary to set up each task in precisely the same way for each user. Each participant approaches a website with his or her own perspective, so adjusting the way in which the test questions are asked to suit each individual seems more natural than trying to fit each respondent into one mold.

There are some inherent dangers in not following a script. Following the script when giving instructions to participants can avoid problems such as leaving out important instructions, giving away too much information, or creating inconsistencies between respondents. There are also times when asking questions in a specific order is necessary, especially when testing a linear process. This can be very important when testing clickable prototypes with limited functionality.

On the other hand, there are many benefits to conducting sessions that don’t necessarily adhere to a specific order, but simply follow the way the respondent naturally stumbles upon things. Additionally, there is enormous potential in including unscripted follow-up questions that stem organically from the conversation that takes place between moderator and respondent.

Reading from a script can also cause a disconnect between the respondent and the moderator, making the moderator seem, or feel, cold and mechanical. A less obviously scripted discussion can relax the participant and promote a free-flowing conversation.

Essentially, the script should act as a communication tool that enhances, rather than stifles, the conversation. With a good understanding of the objectives of a study, a moderator can often bounce from task to task within the session, based on the direction the respondent wants to go. For example, a respondent who is evaluating an e-commerce site and is completing a task related to shopping for apparel may spontaneously want to add an accessory to the purchase. Though that task may have been included in the guide, it was perhaps intended to come much later in the session. Rather than stopping the respondent, the moderator can uncover far more information by going with the respondent’s flow, even though it is “out of sequence.”


Considering the script as a guide does not imply that there should be anarchy during the usability session. On the contrary, it is important for the moderator to maintain a modicum of control, stay on target, and address all of the tasks within the time allotted.

A good moderator uses a script to support the conversation with the participant. The script is a primary means of communication between moderator and participant, but researchers should not become dependent on it; rather, they should use it to impart key information to test participants in a consistent way and to guide respondents throughout the test. When used well, a script allows the moderator to respect and learn from the participant’s freedom, while still providing the necessary controlled environment.