Keeping Your Distance: Remote usability testing or the lab?

Many people find it’s tough to do their best when they are being watched. Psychologists call the stress and task impairment that accompanies being observed while executing a task “performance anxiety.” As usability professionals, we need to be concerned with performance anxiety because we continually observe the performance of technology users to learn how to improve product design.

Recently, remote usability testing has become popular. Remote usability testing combines an audio link (typically a conference call) with “screen sharing,” using technology such as Microsoft NetMeeting or Live Meeting. This combination allows the usability engineer to observe the performance of test participants in a “virtual laboratory.” Remote testing eliminates the physical constraints and many of the capital costs of testing in a physical laboratory. Yet two questions have received little attention in this move toward remote testing: performance anxiety and data quality.

Does remote testing produce any more or less performance anxiety than laboratory testing? Does it have any significant effect on performance? Recent findings from survey research conducted at Autodesk, Inc., suggest that remote testing differs from laboratory testing in neither performance nor data quality.

In January 2002, the Autodesk team responsible for Internet-based tools and services decided to emphasize remote usability testing. This made it easier to include people who were not within comfortable driving distance of Autodesk’s California headquarters and increased the team’s ability to reach international customers.

Autodesk conducted usability sessions with approximately 150 of its customers. About one third of those customers (50 users) were tested remotely via the screen-sharing application Live Meeting and teleconferencing. At the end of the year, Autodesk conducted a survey to determine whether the nature of the experience differed between remote and local testers.

Sixty-one people – about half of those tested – responded to the survey. Approximately 70% of the respondents had been tested at Autodesk’s usability laboratory, while 30% had been tested remotely.

Both remote and local testers were satisfied with the testing process. There was no statistical difference between the laboratory and remote groups on the following questions:

  • The majority (67%) of the survey respondents strongly agreed with the statement: I enjoyed participating in the usability session.
  • When asked how they felt after the usability sessions, a plurality (41%) of respondents strongly agreed with the statement: I provided useful feedback.
  • The majority (75%) strongly believed they were honest and frank with their comments.

Remote and local testers also felt positively about their own participation and about the behavior of the interviewer. There was no statistical difference between the laboratory and remote groups for the following variables, rated on a scale of 1 to 7, where 7 is “high”:

  • Testers reported that during the test they made an effort to be intelligent (mean 5.73), articulate (mean 5.82), consistent (mean 5.62), and nice (mean 5.45), but not biased (mean 2.82).
  • The usability facilitator (interviewer) was seen as unbiased, intelligent (mean 5.94), articulate (mean 6.05), consistent (mean 5.82), and nice (mean 6.36).
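The “no statistical difference” claims above rest on comparing the two groups’ mean ratings with a significance test. As a rough illustration of how such a comparison works, the sketch below computes Welch’s two-sample t statistic for entirely hypothetical 1-to-7 Likert ratings (the actual Autodesk survey data are not published here); a |t| well below about 2 indicates no significant difference at the conventional .05 level for samples of this size.

```python
import math

def welch_t(a, b):
    """Welch's two-sample t statistic (unequal variances allowed)."""
    mean_a, mean_b = sum(a) / len(a), sum(b) / len(b)
    # Sample variances (n - 1 in the denominator)
    var_a = sum((x - mean_a) ** 2 for x in a) / (len(a) - 1)
    var_b = sum((x - mean_b) ** 2 for x in b) / (len(b) - 1)
    # Standard error of the difference between the two means
    se = math.sqrt(var_a / len(a) + var_b / len(b))
    return (mean_a - mean_b) / se

# Hypothetical 1-7 Likert ratings, NOT the real survey data
lab_ratings    = [6, 5, 6, 7, 5, 6, 5, 6, 7, 6]
remote_ratings = [5, 6, 6, 5, 7, 6, 6, 5, 6, 6]

t = welch_t(lab_ratings, remote_ratings)
print(round(t, 3))  # a small |t| suggests no significant group difference
```

In practice one would also compute a p-value from the t distribution (for example with `scipy.stats.ttest_ind(lab_ratings, remote_ratings, equal_var=False)`), but the t statistic alone conveys the idea: similar group means relative to their variability yield a value near zero.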

The findings from this survey study suggest that customers who participate in usability testing remotely via screen sharing are as candid and comfortable with the experience as customers who participate on-site, and that data collected remotely is not different in character from data collected using traditional co-located laboratory protocols.

Bradner, E. (2004). Keeping Your Distance: Remote usability testing or the lab? User Experience Magazine.
Retrieved from http://uxpamagazine.org/keeping-your-distance/