Training users on a new software package can be very expensive and should be a consideration in software development. When users cannot be pulled from their work areas for long periods of time to learn new packages, companies often use a pyramidal training model in which super users at each level receive training and then train others in the level below them. Ease of learning and recall during training is a significant part of usability for such users, and testing with them should take this into account.
While testing a software package used for managing academic conferences, we applied this user-training scenario. Our test included traditional roles but also allowed us to assess memorability and learnability, because participants reversed roles near the end of the session and trained one of our team members.
Although fixed roles help to structure the testing process, workplace roles are not always fixed. Our approach in the lab simulated one form of software training used by many organizations by varying participants’ roles in-test.
Some usability methods address collaborative training by adding extra participants to play additional roles—for example, pairing neophytes with experts in collaborative walkthroughs or team tasks. Another method, called “teach-back” or “train-the-trainer,” is used in public health. In this method, the participant teaches the facilitator the content she has learned. When patients can explain the treatment plans or procedures they have just discussed with their doctor or nurse, practitioners can see how much the patients have learned and understood. We applied similar reasoning to our test design.
Keeping the standard roles through most of the test allowed us to maintain some control over the test and also glean usability data regarding efficiency, errors, and satisfaction.
Reversing the roles during the last task then provided qualitative measures of learnability and memorability that we would not have gleaned from the standard protocols.
Tests that yield data on learnability and memorability can provide companies with greater insight from their testing. Companies using teach-back in their own tests might discover how to improve training for real, and not just ideal, users. Such tests may also help companies anticipate real users’ issues with learning and recalling product features.
Applying the Method to a Real Test
We recently applied this method in one of our software usability tests. Between 2007 and 2008, members of our American team worked virtually with a German software developer on documenting and testing an academic conference management package. The software is an open source tool and undergoes continuous development. Typical users of this system are faculty members, graduate students, and department staff, and because pilot testing suggested that these groups all use the software similarly, we focused on testing with faculty members.
Our central test question was how usable faculty members found the software while they performed routine conference management tasks. At our institution, previous testing on the software had sought to improve its documentation. Our most recent test, however, focused on the software itself. We wanted to find out how the product’s design and architecture supported or obstructed its use.
For our test, we recruited five full-time faculty members to evaluate the software. All participants had between one and twenty-three years’ experience managing academic conferences, but only two had prior experience using software to support that work. Our pre-test screening ensured that these users were familiar with the tasks involved in administering a conference and managing submissions. Before testing, we also populated the test software with data from a recent conference. Our participants were familiar with the subject and so the familiar data helped create a realistic experience during the test.
Building Learnability and Memorability into a Test
We designed our tests around the following four tasks:
| Section | Task | Description |
|---|---|---|
| Section 1 | Task 1 | Assign an incoming paper to a reviewer |
| | Task 2 | Create a conference session |
| | Task 3 | Assign an accepted paper to a session |
| Section 2 | Task 4 | Teach a graduate assistant to assign an incoming paper to a reviewer |
In the first test section, one team member performed standard, scripted facilitation and participants worked through three logically related conference management tasks, each building on the last.
In the second part of the test, roles shifted: in the final task, participants taught the facilitator how to complete the first task. This required them to remember how they had completed Task 1 and to demonstrate their recall by teaching someone else to complete it. An average of twelve minutes elapsed between participants completing the first task and beginning the fourth, giving them enough time to focus on other aspects of the program before trying to teach.
Parsing Learnability and Memorability from Test Data
To support observation and analysis, we video-recorded the sessions using two movable cameras and one fixed, overhead camera. We focused the movable cameras on the participant and the facilitator so that we could capture their expressions in either role. We also recorded the screen while participants used the software.
To analyze the data, team members reviewed the session videos individually, and then met to complete an affinity diagramming session together. We watched the sections of the videos that corresponded to the first and fourth tasks and noted individual points on separate Post-its. We then organized observations into groups of similar content, reviewing and rearranging them into emerging categories.
Through this inductive process, we developed seven themes in the data, including general efficiency, error, and satisfaction issues as well as themes such as “confidence” and “feedback.”
For example, the data we grouped under “confidence” showed that participants recalled and were able to teach how to assign a paper to a reviewer; it highlighted the users’ confidence levels with the specific task. On the other hand, “feedback” data points highlighted participants’ approach to teaching and offering teacher-style feedback to their “graduate student assistant.” Thus, the data in this group gave us insight into participants’ general attitude to the new teaching role.
All participants but one moved from a position of confusion to a position of confidence during this round of testing. While working through Task 1, participants tentatively explored the program and expressed their uncertainty with it. By Task 4, however, they confidently gave our team member directions and encouraged her through the task. This learning took place despite the fact that more than half of our users found the software complex and non-intuitive: the program has a dense menu structure, some menu labels change between screens, and important buttons are unusually positioned. Without the role-reversal method, however, we would not have been able to isolate and identify issues regarding the software’s learnability and memorability so clearly.
Evaluating the Method
Like all methods, our approach has potential advantages and disadvantages.

Advantages:

- This usability method requires no additional testing costs and is simple to implement.
- Changing roles can indicate how comfortable participants are with the product. That in turn gives a measure of the product’s learnability.
- This method can help usability teams gauge how well users can navigate within the software package, because the users have to direct someone else through menus and fields.
- This method may be ideal for testing documentation solutions because some participants may want to verify their instruction using existing documentation.

Disadvantages:

- Some users may be either so uncomfortable in a teaching role or so nervous during testing that they won’t remember the process they used to successfully complete a task.
- Role-switching may give users the feeling that they are being tested, rather than the software.
Usability professionals applying this method should also consider the following tips:
- If testing a software package, facilitators should return users to the program’s home or index page before allowing users to begin teaching. This avoids implying where participants should go to complete their next task.
- During the role-switching task, the facilitator must avoid leading the participant. Facilitators can minimize bias by keeping the mouse still, making no suggestions, and answering, “What would you like me to do?” if the participant does not instruct them.
- Role-switching will only provide representative data if there is a time lapse between the tasks users complete as participants and the task they complete as the teacher. Users should be given some time between learning a task and attempting to teach it.
- If a participant cannot perform the initial task where they learn to use the software, they will probably also be unable to complete the teaching task. Usability teams should consider designing a second, comparable task for users to learn and teach in case of an early failed task. Built-in redundancy on a critical task can help ensure that teams receive the data they need most from a test.
Applying this method again with modifications could indicate whether participants might perform better on learnability and memorability tasks if given advance notice that they would teach other users. More difficult tasks and more complicated software than those we tested could prove less compatible with this method, and further applications of the method among non-academics would indicate whether it can work as well for corporations as it did in a university setting.
Switching participants’ roles mid-test responds to the fact that workplace roles are much more flexible than traditional lab usability testing acknowledges. Reversing roles in our test allowed participants to demonstrate the kind of routine role-switching that occurs when one user has to train another. Additionally, this method provided us with significant data on learnability and memorability that otherwise would have required recruiting additional participants.
Retrieved from http://uxpamagazine.org/testing_ease_learning_recall/