

What’s So Hard About Enterprise UX?: ERP Software Revisited

During the last twenty years, enterprise resource planning (ERP) software has become a standard fixture of the daily work environment of many professionals who work in large companies. While there are other types of “enterprise” software, this article will focus on ERP systems, a $6-billion-a-year industry. Many Fortune 1000 companies have deployed ERP systems to support all types of transaction-oriented work, from accounting to sales functions. Smaller companies are now adopting this technology due to the efficiencies of the Software as a Service (SaaS) model that make it more affordable. This means that, increasingly, workers spend their days sitting at keyboards and staring at monitors, feeding data into some variant of an ERP system.

ERP’s Influence

Today’s ERP systems are extremely complex. One could argue they act as the central nervous system in our modern economy. Global multinationals are dependent on them. Without ERP, managers of corporate workers who are increasingly spread across the globe would be unable to coordinate the activities that are responsible for billions of dollars of commerce.

The spread of the Internet has enabled transactions between these systems, allowing the tentacles of these monster programs to reach into even the smallest companies and the daily lives of consumers. It is ERP systems that ultimately capture the transactions initiated by consumers as they click on e-commerce web pages, Kindles, iPhones, and the pervasive point-of-sale systems that have replaced cash registers.

Consider what it must be like to work day-after-day in front of a screen produced by an ERP vendor that occasionally spits out incomprehensible errors, or worse yet, loses your work. It’s definitely not an experience comparable to using an iPhone. Enlightened members of the analyst community have long known this, but the IT industry is finally maturing to the point where key stakeholders (not users) are starting to take notice.

Why Is This So Hard?

As professionals, we know that framing the design problem—getting a shared vision of the design goals and constraints with all the project stakeholders, including the users—is one of the most important aspects of user-centered design. This is where designing enterprise software becomes more complex than designing consumer software.

Historically, one of the key challenges in enterprise software has been the number of stakeholders involved and their conflicting agendas. This is the major barrier to improving the usability of ERP systems. It is not the UX professionals involved, or the lack of resources, but the nature of the problem itself. Let’s review the different stakeholders for ERP systems:

Individual Users

The individual users are the accountants, call center representatives, sales personnel, and countless others who record transaction-oriented data in ERP systems. In cases of self-service-oriented purchasing or expense reporting software, they could include anyone who is required to do those tasks in the company, not just the individuals in the purchasing or procurement functions. Unlike consumer software end users, they don’t have a say in the purchase of ERP systems. They also don’t have a way of providing feedback to the vendors about user experience problems.

Functional Managers

The functional managers on the team perform specialized functions in the organization. Examples include managers of sales divisions and customer support organizations. It’s important to consider managers’ goals for ERP systems during design, such as providing consistent ways of combining sales forecasts or tracking support issues. This extends beyond what individuals do with the UI to how that work is coordinated and measured. Functional managers occasionally have limited input into purchase decisions at their companies. Unfortunately, they almost never have a direct line of communication with vendors to discuss product enhancements.

Enterprise Executives

As the “E” in ERP implies, the systems are designed to serve enterprises and their executive management. Examples of enterprise-level needs include aligning sales predictions with manufacturing plans or the related staffing and support budgets. ERP systems are designed to meet the needs of the executives who are the ultimate customers and decision makers regarding ERP purchasing. Executives have significant influence on ERP vendors, but typically this is channeled through individuals with IT responsibility, such as the CIO or CTO. One problem is that enterprise executives rarely, if ever, actually use ERP systems. So while they hold the purse strings, letting executives select an ERP system is much like having your grandmother select your clothes for you.

Other Stakeholders

The other stakeholders include customers, business partners, and analysts who “guide” the customers of ERP systems. Since most of today’s ERP systems incorporate what is known as business-to-business (B2B) and business-to-consumer (B2C) functionality, this also needs to be considered. Most modern ERP systems are used by both customers and business partners. For example, using ERP, Amazon can tell you when your shipment will arrive and how much shipping will cost. Ecosystem members have limited-to-no input into ERP purchase decisions. Analysts can significantly influence the decision makers, but rarely focus on the user experience aspects of ERP.

Other Factors

Another factor contributing to poor ERP user experience is sheer complexity. ERP suites, which contain a broad range of functions, might have tens of thousands (if not hundreds of thousands) of pages or screens. Similar to the large corporations they serve, ERP systems are typically composed of loosely coupled, functionally specialized modules. These specialized modules are designed to be used only in one part of a company, such as the call center, accounting, or human resources department.

As one would expect, the design of these modules reflects the philosophies of modern corporations. As such, ERP systems inherit both the strengths and weaknesses of this way of conducting business. One key weakness in both corporations and ERP suites is that their modularity makes them resistant to change.

ERP is software for corporations, designed by corporations. While this might sound like a good thing, consider that most large corporations suffer from poor cross-functional communications. This presents many barriers to good design.

User-centered design depends on regular, rich interaction with users throughout the development process. Better feedback loops result in better designs. Unfortunately, insufficient feedback from the user (note the use of “user” and not “customer”) is the norm in enterprise software today. The frequent interaction with users required for design iterations is rare compared to that in more consumer-oriented companies. The iterative design process for a package of soap at Procter & Gamble is subject to much more user feedback than most ERP modules.

The ERP ecosystem is fertile ground for what former Microsoft COO Robert J. Herbold calls “the fiefdom syndrome.” Many players in the ecosystem are solving for their own short-term interests. Here are some of the classic maladaptive behaviors:

  • Most of us recognize that the end users of ERP systems would be the best source for many UI requirements. Unfortunately, these users are under pressure to get their primary work done. This makes them hard to engage in the design process, even if you can navigate the organizational barriers necessary to reach them at all.
  • Managers of functional areas in large corporations often have limited recent hands-on experience with day-to-day transactional work. They are rarely the best source of information on how to design systems for their staff, but often they don’t realize this. These managers also don’t want the day-to-day productivity of their teams impacted by IT initiatives like providing feedback on the design of ERP systems. Nor do they have the time or motivation to get involved in IT projects like ERP deployments; they simply aren’t rewarded for doing so.
  • IT departments in companies typically want to own the requirements for their ERP systems. Unless they have training and experience conducting user-centric research methods, they may not have the skills to do this work, at least at the level of sophistication found in consumer product companies. Making it worse, they are often discouraged by managers of functional areas when they do attempt to conduct any requirements analyses with end users. All too often, IT staff gets rewarded for introducing new technology, but not evaluated based on the impact this technology has on worker productivity. The net result is that corporate IT departments can become more of a barrier than a facilitator in efforts to gather user feedback to refine ERP systems.
  • IT consulting and professional services firms want to position themselves as experts to their customers. Often this means they fail to fully analyze the needs of each customer in detail, relying instead on their expertise. Rarely do they admit the need to conduct any type of user-centered requirements analysis; doing so would require billing the customer to learn requirements for an ERP system they claim expertise in. Even if they do propose a detailed requirements analysis effort incorporating usability feedback, and have a staff skilled in doing so, this adds to the cost of the “scoping phase” of the engagement, which means it rarely gets approved. When requirements applicable across customers are discovered, this is often seen as an opportunity to create “custom solutions” for which each new customer is billed, rather than suggesting these enhancements to vendors.
  • Salespeople at ERP vendors are rarely motivated to assist in engaging the customer in any deep level of requirements discussions. They are primarily motivated to close each deal quickly. Any ongoing discussions with customers are typically viewed as conflicting with sales goals. Salespeople may also want to avoid any possibility that customers perceive the current product as deficient, because it may impact short-term sales efforts.
  • Professional services teams that are part of ERP vendor organizations are typically motivated on a project-by-project basis to gather requirements. Unfortunately, they may also be motivated to keep this information to themselves, since this knowledge makes them more valuable. They may also develop “custom solutions” which they reuse without the knowledge of the customer or the product teams.
  • Support organizations often have plenty of insight into what is not working with deployed products. Rarely are they consulted before a customer actually deploys the product. Typically, they are rewarded for “closing” an issue as quickly as possible. This often results in them not having time to participate in requirements gathering efforts. When they do identify product improvements, they often struggle to get improvements implemented, as these are typically seen by product management as less strategic than new functionality driven by marketing considerations.
  • Product management often has limited customer interaction due to the influence of the previously mentioned organizations. Even if they have significant domain knowledge, it often quickly becomes outdated and is limited to experience at a single company which may or may not represent the market as a whole. All too often, product management is incented to focus on feature enhancements rather than usability and simplification. Another problem that occasionally arises is that product management may want to “own” requirement decisions so much that they fail to facilitate an ongoing dialog between user experience specialists or product teams and the customers and end users. Even when motivated to do the right things, they may struggle to overcome the organizational barriers within both their own company and those of customers they try to engage with.
  • Development teams may not work closely with any of the above organizations. They are typically incented to deliver new functionality as quickly as possible and have little say in extending deadlines to address quality or user experience issues. In some cases they may not even work closely with product management, due to pressure to focus on completing development tasks on current projects.
  • Another contributing factor is the lack of shared vision on user experience or other best practices within vendor organizations. Due to the size of ERP projects, many vendors have specialized product teams focused on each functional module, working with limited oversight. This means modules created by different teams often fail to integrate well, creating usability issues that impair efficient collaboration among a customer’s functional divisions, and even design problems at the enterprise level. It also results in a new twist on the “it’s not my department” problem when it comes to UI design.
  • When a product team member in a vendor organization, a user experience advocate in the partner or customer ecosystem, or an industry analyst tries to work with others to resolve user experience issues, the functional separation makes this difficult. The end result is that ERP systems often look like they were designed by developers using different requirements, instead of a consistent, unified system. Efforts within vendor companies, customers, and the ecosystem as a whole are often uncoordinated. Interacting with other stakeholders requires navigating an ecosystem filled with individuals and organizations that have conflicting priorities.

Recognizing Enterprise Software Is Different

Jeff Conklin recently wrote an article in Rotman Magazine advocating a “design approach” to solve so-called “wicked problems.” Wicked problems:

  • Are complex problems that lack a clear definition agreed upon among the multiple stakeholders
  • Lack a binary-like state of success for any given point in time (only some type of optimal state)
  • Are unique enough to be unsolvable by commonly known methods
  • Require experimentation on many dimensions in order to make progress.

This sounds a lot like enterprise software to most of us who have worked on it.

What can we do?

We need to recognize that designing usable enterprise software is different. In the past, most enterprise companies have attempted to apply methods derived from consumer product design without much modification. Just testing representative individual users in a usability lab is not sufficient.

As Jakob Nielsen mentioned in “Enterprise Usability,” his November 2005 Alertbox newsletter column (www.useit.com/alertbox), there are actually three dimensions to cover:

  • Individual users
  • Groups of users
  • The enterprise

Standard lab testing misses group- and enterprise-level usability issues, as well as the impact of test data versus real data in these studies. User experience professionals working on ERP systems need to explore methods that overcome these problems.

Some Promising Directions

The Common Industry Format (CIF) effort was a significant step forward. It started a dialog among vendors and enterprises. However, almost ten years after it was proposed, most IT organizations and industry analysts remain largely ignorant about user research methods, including summative testing. According to the Standish Group, only 40 percent of IT organizations measure the success of the systems they deploy. ERP customers should be asking for CIF data, and analysts should be publishing reports discussing CIF study findings. Customers should be asking why vendors have not sent someone out to discuss studies measuring the usability of the latest updates to their ERP systems. Industry analysts should start asking about this kind of data, rather than acting as extensions of ERP marketing departments to push further investments in IT.
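For readers unfamiliar with it, CIF is a standard format for reporting summative usability tests, centered on measures of effectiveness, efficiency, and satisfaction. As a rough illustration only (the data and field layout below are invented, not part of the CIF specification), here is how those three headline measures might be computed in Python:

```python
# Hypothetical per-participant results for one task:
# (completed the task?, seconds on task, satisfaction rating on a 1-7 scale)
sessions = [
    (True, 142.0, 6), (True, 98.5, 5), (False, 301.2, 2),
    (True, 120.7, 6), (False, 275.0, 3), (True, 110.3, 4),
]

successes = [s for s in sessions if s[0]]
completion_rate = len(successes) / len(sessions)                 # effectiveness
mean_time = sum(s[1] for s in successes) / len(successes)        # efficiency (successful attempts)
mean_satisfaction = sum(s[2] for s in sessions) / len(sessions)  # satisfaction

print(f"Completion rate: {completion_rate:.0%}")
print(f"Mean time on task (successes): {mean_time:.0f} s")
print(f"Mean satisfaction: {mean_satisfaction:.1f} / 7")
```

A CIF report packages measures like these with details of the participants, tasks, and test environment so that customers can compare results across products and vendors.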

In another step in the right direction, vendors have started conducting ethnographic studies of enterprises that specifically focus on workgroup- and enterprise-level factors in addition to end users. Studies of this type take time and planning, but they provide valuable data for designing ERP systems. Even more important than any particular design improvements identified in these studies is the shared context they help develop among stakeholders. Progress is being made, but until everyone in the ERP ecosystem is familiar with and supportive of this type of work, there remains significant room for improvement. Success looks something like this: CIOs start asking vendors why they haven’t seen someone studying ERP use at their company, and asking about the findings of those studies.

The most promising trend is the increasing use of customer advisory programs. Such efforts address the lack of alignment among the stakeholders in the enterprise ecosystem, a key barrier to improving designs. Any increase in dialog will help create a shared vision of both the problem and how to make progress. While these efforts are often restricted to a small, vocal set of customers, it is certainly a step in the right direction. Ideally these programs would be run as working groups, focusing on resolving issues identified by other methods including, but not limited to, low user satisfaction scores, low scores on CIF-defined measures of deployed ERP systems, and usage data collected from large numbers of customers.

What can usability professionals—many of whom work in customer and vendor organizations—do to make a difference? One way to help would be to prioritize the discussion of enterprise user experience within our profession. We can also highlight the efforts of vendors as well as user experience professionals who work for forward-thinking ERP customers.

Usability professionals might also consider starting an outreach program targeting executives who make IT purchasing decisions, with the goal of educating them about the impact of poor usability on worker satisfaction, organizational efficiency, and, more importantly, corporate profit margins. If we can convince them that user experience knowledge can impact their business, they will listen.

Older Users Online: WAI Guidelines Address the Web Experiences of Older Users

Age matters with wine and cheese, and also with the user experience of older people on the Web. As we age, we experience increasing impairments that impact how we interact with computers and websites. This article provides a peek into some of these issues, and points to existing solutions for making websites accessible to older people, along with people with disabilities.

The Web Accessibility Initiative: Ageing Education and Harmonisation (WAI-AGE) Project is a European Commission-funded project of the World Wide Web Consortium (W3C) Web Accessibility Initiative (WAI). It aims to promote education and harmonization regarding the accessibility needs of older users. It includes an extensive literature review to learn about their requirements, and will better explain WAI educational resources to web designers, developers, and older users.

An Aging Population

Compared with any other period in human history, the next few decades will see unparalleled growth in the number of elderly people. The United Nations estimates that by 2050 one out of every five people will be over 60 years of age; in some countries the proportion will be even higher. The European Union is estimated to be undergoing a demographic shift: 15.7% of its population was over 64 in the year 2000, a proportion projected to reach 17.6% in 2010 and 20.7% by 2020. In Japan, the change is even more dramatic: 20% of the population was already over 65 in 2005, and this is forecast to increase to 27% by 2015.

Age-Related Impairments Affect Internet Use

With increasing age often come increased visual, auditory, physical, and cognitive impairments. The Royal National Institute for the Blind in the UK has estimated the proportion of older people in the UK affected by declining eyesight (significantly affecting daily living) as follows: 65-74 years: 15.8%; 75-84 years: 18.7%; 85+ years: 45.8%. Some specific declines include:

  • Changes in color perception and sensitivity (often making darker blues and black indistinguishable)
  • Pupil shrinkage (resulting in the need for more light)
  • Declining contrast sensitivity
  • Decreasing ability to focus on near tasks with a loss of peripheral vision

All of this impacts the ability to view a web page, perceive the information displayed, and notice small changes that may be applied as a result of selecting certain links.

With the increasing amount of audio and video on web-based news sites, entertainment sites, and social networking sites (such as YouTube), hearing loss can significantly affect an older person’s access to this type of material if alternatives are not provided. Estimated percentages of the older UK population who experience moderate to profound deafness are: 61-80 years: 18.8%; 81+ years: 74.7%.

Arthritis, which affects 50% of Americans and Australians over 65, and Parkinson’s disease are the primary physical debilitators of older people. Both arthritis and Parkinson’s are likely to cause difficulties using a mouse or other pointing devices, as well as using the keyboard.

There are many different types of cognitive deficits. Among older people, dementia, including Alzheimer’s disease, appears to be the most common. Prevalence rates of dementia are estimated at 1.4% for those aged 65-69 years, rising to 23.6% for those over 85 years.

Many older adults may not experience dementia or Alzheimer’s, but might experience mild cognitive impairment (MCI) or subjective memory loss. Common experiences associated with MCI include:

  • Trouble remembering the names of people met recently
  • Trouble remembering the flow of a conversation
  • An increased tendency to misplace things

These factors likely affect how people use websites. For example, it may be difficult for some older people to understand the navigation design of websites, or to remember specifics about how to operate different user interfaces.

One of the issues with age-related functional impairment is that older people are likely to develop multiple impairments. Twenty percent of Americans over 70 reported dual sensory impairment. High levels of dual impairment have been shown to increase the risk of difficulty with the “instrumental activities of daily living.”

As these impairments generally develop slowly, they are often not recognized as disabilities. Furthermore, many older people do not want to acknowledge the ageing process, and deny or disguise any functional or sensory impairment. In Australia, over half the population aged 60 years and over has a disability.

There is a false perception among many people that older people are not online. This is a myth: older people are the fastest growing web demographic. In the UK, recent surveys indicate that 30% of all those over 65 have used the Internet, up from 18% in 2006. With more reasons to be online, from communication to government to shopping to banking, we expect this proportion to continue growing.

Requirements for Elderly Users Identified From Literature

The W3C WAI has completed an extensive review of previous studies of older people online, and particularly the requirements for web design that would enhance their ability to use the Web. We reviewed the following types of literature:

  • Discussion of the general functional and sensory limitations often experienced as part of the ageing process
  • Collections of broad recommendations for making websites more accommodating for older users
  • Studies focused on the impact on web use of particular limitations experienced by older users
  • Studies looking at specific design aspects of websites, or specific types of sites and the general impact on older users

After reviewing this wide range of literature that considered age-related functional impairments and issues facing older web users, we are able to make some general observations about the web experience of older users in these studies.

1. Information overload was one of the most common problems identified for older users. Particularly problematic are:

  • Too much material on the page, making it harder to focus on relevant material
  • Advertisements and movement distracting the users from their goals
  • Hypertext navigation providing nonlinear paths through the information
  • Different layouts, navigation structures, and interaction between sites

2. The experiential requirements for older users, rather than the technical aspects of sites, featured heavily among the recommendations:

  • Content and presentation-related aspects of the Web, such as color, contrast, and spacing, received the most emphasis from the authors reviewed
  • Navigation-related issues, such as broad versus deep menu structures and the many ways in which links are portrayed, received significant emphasis from many authors

Many aspects of good usability emerged in the recommendations; they were not specific to older users, but are general usability principles for all users.

3. Web inexperience is currently an influencing factor in many studies and received a lot of discussion. When inexperience is combined with functional impairments, the combination can be overwhelming for some users. Inexperience will diminish as a major factor over time, as more older people gain access to the Web and build experience. Additionally, many of the “younger older” users have been using the Web for years. However, new web applications and uses may create a new form of inexperience as the Web continues to evolve.

Some gaps were identified in the studies we reviewed, including:

  • Hearing loss and deafness were clearly identified as a common sensory loss associated with ageing. However, these were not covered by the collected recommendations for meeting the needs of older users on the Web, nor in the reviewed research on age-related web use.
  • Existing web accessibility guidelines for people with disabilities were not acknowledged or discussed in most of the broad sets of recommendations for designing web pages to meet the needs of older users, nor were they discussed in much of the scientific literature.
  • In fact, we observed not only a lack of knowledge/acknowledgment of web accessibility guidelines, but a strong tendency to “reinvent the wheel.”
  • Assistive technologies or adaptive strategies (and associated requirements) that might help accommodate impairments were seldom mentioned, possibly reflecting the fact that age-related impairments are seldom considered disabilities, just signs of “getting old.”

WAI Guidelines Cover Web Experiences of Older Users

After collating the requirements of older users identified during the literature review, we compared these needs with the accessibility guidelines developed by WAI:

  • WCAG: Web Content Accessibility Guidelines
  • ATAG: Authoring Tool Accessibility Guidelines
  • UAAG: User Agent Accessibility Guidelines (for browsers and media players)

Our comparative analysis showed that the majority of the presentation and navigation-related needs identified in the literature, such as contrast, text size, line spacing, and link identification, were covered within WCAG 2.0, along with the identified technical requirements such as the use of CSS. Some of the other needs, such as simplified interfaces, could be met by browser modifications.
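Many of these presentation requirements are mechanically checkable. As one illustration, the short Python sketch below implements the relative luminance and contrast ratio formulas defined in WCAG 2.0 and checks an example color pair against the 4.5:1 level AA threshold for normal text; the specific colors are arbitrary examples:

```python
def relative_luminance(rgb: tuple[int, int, int]) -> float:
    """WCAG 2.0 relative luminance of an sRGB color (components 0-255)."""
    def linearize(c: int) -> float:
        s = c / 255
        return s / 12.92 if s <= 0.03928 else ((s + 0.055) / 1.055) ** 2.4
    r, g, b = (linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg: tuple[int, int, int], bg: tuple[int, int, int]) -> float:
    """WCAG 2.0 contrast ratio: 1:1 (identical colors) up to 21:1 (black on white)."""
    lighter, darker = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

# WCAG 2.0 level AA requires at least 4.5:1 for normal body text.
ratio = contrast_ratio((102, 102, 102), (255, 255, 255))  # mid-gray text on white
print(f"{ratio:.2f}:1 -> {'passes' if ratio >= 4.5 else 'fails'} AA")
```

Automated checks like this catch only a slice of the issues the guidelines cover; needs such as simplified interfaces still require human judgment.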

WAI guidelines actually cover a broader range of age-related impairments and their impact on web use than the literature. Specifically, WAI includes the needs of people with hearing difficulties, a significant issue for older people. It also provides solutions so that non-mouse access to web pages is properly supported, which should assist older people with arthritis and other dexterity-related impairments.

Improving the Understanding of Users and Designers

To address some of these gaps in knowledge and awareness, and to better help designers meet the needs of the increasing number of older users, the WAI-AGE project is revising some of the existing WAI documents, such as those on involving users with disabilities in the design process and on the business case for web accessibility. Involving older people and people with disabilities throughout the design and development process is essential to understanding and fully incorporating their needs.

Additionally, some new documents are planned, including one to assist designers and developers to understand the relationship of WCAG 2.0 to older users’ requirements. Another document will help older users and their teachers better understand how to adapt the browser to accommodate impairments, and what assistive technologies may be of benefit.

We hope that the results of this project, including the literature review itself, will help researchers target new areas that still need investigation with respect to use of the Web by older people, such as hearing decline and cognitive issues. We also hope that the project will help researchers gain a more comprehensive understanding of WAI’s work and build on it further as some of the needs of older users are better understood.

Conclusions

The literature review undertaken by the WAI-AGE project identified significant overlap between the accessibility needs of older users and people with disabilities. It also determined that WCAG 2.0 meets most of the identified requirements of older web users. However, there is an ongoing fragmentation and redevelopment of new standards rather than adoption of existing ones. One of the aims of this project is to connect the two communities and the associated research and investigation to encourage further development and support of existing accessibility standards.

The literature review also shows a need for more research in specific areas of Internet access by the elderly: for example, age-related hearing loss, navigation styles and preferences, and the use of adaptive strategies for web browsing.

The project invites participation by experts, trainers, researchers, and users interested in promoting accessible web solutions. Preparing pages that meet the needs of older users, as well as people with disabilities, would be much easier if authoring tools, including content management systems (CMS) and blogging software, conformed to ATAG and helped with the generation of accessible websites. With people working longer, ATAG-conforming authoring tools would also enable better working environments for older people.

Getting Your Money Back: The ROI of remote unmoderated user research

Remote unmoderated user experience research is a valuable methodology to include in any consultant’s toolkit. The ability to conduct task-based research that collects both qualitative and quantitative data enhances return on investment (ROI) significantly. This approach can also be used to augment other research methodologies.

This methodology synthesizes clickstream data with attitudinal and task success data, all taken from the same people at the same time as they interact with a website. Data regarding user intent are correlated with information regarding how users go about achieving their goals, and why the experience does or does not work well. Not only does this methodology provide an increased ROI for the cost of conducting a study, but customer experience researchers can use it to investigate user behaviors at every stage of the website life cycle.

Targeting users with intercepts.

An Effective Research Method

Intercepting visitors to a website and inviting them to participate in a remote usability study is arguably the most effective way to understand visitors’ goals and intentions. Many companies assume that they know why people come to their site, but when they ask real visitors, the responses may come as a surprise. Other companies don’t understand why people come to their site at all and build it solely from a business perspective.

A large sample size gives business stakeholders more confidence in the results of research studies focusing on the overall health and success of a website or prototype design. With increased confidence in the research data, the results can be used to inform the design process by helping website development teams prioritize changes that need to be made to the site design.

Remote unmoderated user research is an effective method that provides many benefits (Table 1).

Table 1. Benefits of remote unmoderated user research.

Increase geographic reach
  • Recruit participants from around the country and world
  • Evaluate multiple sites simultaneously: multiple country sites, competitor sites, or brand sites such as the Hilton and Starwood hotels

Provide a local language experience for participants during the study
  • Present system messages, buttons, and other UI elements in local languages
  • Translate task scenarios and questions
  • Evaluate localized websites

Eliminate moderator bias
  • Manage the participant flow and questioning with conditional logic, rather than moderator influence and intervention

Provide a natural environment
  • Participants use their own machines at a time that is convenient for them, experiencing their own Internet connection speeds

Acquire research data in a timely manner
  • Evaluate multiple sites or locations simultaneously
  • Test prototypes and iterate quickly: Scrum/agile development teams use this method because researchers can obtain feedback from a sample of participants, pause the study, obtain insights, iterate the designs, resume the study, and repeat the process until the team is satisfied with the design

Collect large samples
  • Sample sizes of 200-400 provide statistical significance (see the sketch after this table)
  • Screen participants to meet profile and quota requirements and study performance criteria, such as minimum task browse time and minimum number of characters for freeform responses, to increase confidence in the data

Obtain comparable results to in-person studies
  • Synthesize behavioral with attitudinal information, allowing researchers to uncover the why behind someone’s behavior
  • Directed tasks provide context for the behavior, helping researchers understand the why behind it

Obtain data from actual site visitors performing their real tasks while they perform them
  • Understand the true intent and experience of real site visitors
  • Learn why people visit a site, who they are, where they go, what they do, whether or not they are successful and satisfied, and much more
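To make the “collect large samples” row concrete, here is a minimal sketch, in Python with invented numbers, of how sample size narrows the uncertainty around a measured task success rate (using the Wilson score interval and assuming independent participants):

```python
import math

def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score confidence interval for a task success rate."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    margin = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - margin, center + margin

# A 70% observed success rate in a small lab study vs. a large remote study:
for n in (20, 300):
    low, high = wilson_interval(round(0.7 * n), n)
    print(f"n = {n:3d}: 95% CI = {low:.1%} to {high:.1%}")
```

With 20 participants, the interval spans roughly 48% to 86%; with 300 it tightens to roughly 65% to 75%, which is why larger samples give stakeholders more confidence in go/no-go decisions.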


A Cost-Efficient Research Method

Remote user research is a very efficient way to conduct research.

  • Multiple sites and countries can be evaluated simultaneously, reducing cost and time commitments.
  • Both attitudinal and behavioral data are captured on significantly larger sample sizes than in more traditional research.
  • Multiple research projects can be conducted simultaneously by one researcher, rather than requiring a team of researchers.

The cost benefits for remote unmoderated user research are listed in Table 2.

Table 2. Cost benefits of remote unmoderated user research.
Reduced costs
  • Teams of researchers do not have to travel to locations around the country or world.
  • The remote methodology eliminates lab facility costs and the cost of test computers used in a market research facility.
Fewer difficulties conducting the research
  • Eliminates the need to schedule times for participants to complete research
  • Eliminates the time and effort of travel or organizing local research teams
Easier (and faster) data preparation
  • Automatic capture of performance measures: time on task, time on page, page load time, navigation paths, and client-side actions (hover-overs, text change, clicks, and scrolling)
  • Participants type in their own responses to the questions or comments
  • The research tool aggregates the data and calculates correlations automatically
  • Individual and aggregated clickstream paths are prepared automatically
  • Data can be downloaded to a variety of file formats, such as PowerPoint, Word, Excel, SAS, SPSS


Research Results Provide Real Insights

Understanding users’ goals, attitudes, and behavior provides real insight into why they navigate in a particular way. By looking at the behavior of users through the clickstream data, you gain insights into their attitudes. These insights lead directly to improvements in the design of the site.

For example, a drop-off analysis was conducted on a booking process funnel. While the percentage of people who drop off at each stage in the process can also be obtained from web analytics tools, the designer needs to understand why users are dissatisfied or having problems. Remote, task-based studies tell you why they drop off: you can see exactly which people drop off, and at the same time know why. As a result, you have greater confidence in the reliability and accuracy of the data.
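As a rough illustration of this kind of analysis, the Python sketch below walks hypothetical participants through a booking funnel and pairs each stage’s drop-off with coded reasons drawn from participants’ typed comments. The stage names, sessions, and reasons are all invented for the example:

```python
from collections import Counter

STAGES = ["search", "room_select", "guest_details", "payment", "confirmation"]

# (last stage reached, coded drop-off reason, or None if the booking completed)
sessions = [
    ("room_select", "price unclear"),
    ("confirmation", None),
    ("guest_details", "form rejected phone format"),
    ("room_select", "price unclear"),
    ("confirmation", None),
    ("payment", "did not trust payment page"),
]

reached = Counter()
reasons = {stage: Counter() for stage in STAGES}
for last_stage, reason in sessions:
    # Reaching stage i implies passing through stages 0..i.
    for stage in STAGES[: STAGES.index(last_stage) + 1]:
        reached[stage] += 1
    if reason is not None:
        reasons[last_stage][reason] += 1

total = len(sessions)
for stage in STAGES:
    top = reasons[stage].most_common(1)
    note = f'; top drop-off reason: "{top[0][0]}"' if top else ""
    print(f"{stage:>14}: {reached[stage] / total:.0%} reached{note}")
```

An analytics tool alone would produce only the left-hand percentages; the coded reasons on the right are what task-based remote studies add.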

Intercepts can match the style of the website.

Remote, unmoderated research is particularly effective for conducting “true intent” and “open” web research. For true intent research, attitudinal and behavioral data are gathered from people naturally visiting a website, much as market researchers intercept shoppers outside a store. This method helps us understand why visitors come to a website, whether or not they are successful and satisfied, what they actually do on the site, and what impact the site experience has on brand perception. With large sample sizes of 400 or more, it is feasible to filter the data by visit intent.

In open web research, users must be able to freely and naturally explore the Internet. Researchers learn which sites are “top-of-mind,” which are most popular and why, how the research is approached, and how specific websites are found. Remote, unmoderated research provides the ability to trigger questions when users move from one site to another, providing deeper insights into users’ motivations and behavior.

Patterns in user behavior and sites visited emerge with large sample sizes of 400 or more in open web research. These patterns help researchers understand how users find specific sites (if at all), how the web can be used to develop awareness and preference for specific products and services, and whether or not users naturally navigate to specific sites. (If so, how and why? If not, why not?)

Companies use remote unmoderated research to conduct a variety of studies, such as:

  • BRAND AND VALUE PROPOSITION: Are we promoting the brand and value proposition? Do visitors know what we do and who we are?
  • PROTOTYPE EVALUATION: Will the next site be successful? Will users find it compelling and helpful?
  • TRUE INTENT: Why do users come to the site, and how well does it deliver on those expectations?
  • USABILITY: What are the strengths, weaknesses, and opportunities to improve satisfaction and conversion on my site?
  • COMPETITIVE RESEARCH: How do we compare to the competition? What strategies will help us win?
  • INTERNATIONAL STUDIES: How are we performing in key markets internationally?
  • NAVIGATION AND ARCHITECTURE: How well does my site enable key task accomplishment?
  • OPEN WEB RESEARCH: How do users find our site? How can we use the web to develop awareness and preferences?
  • CARD SORTING STUDIES: What are users’ expectations of the information architecture? What category names should be used for navigation?
  • PERSONA DEVELOPMENT: Who is visiting our site? What psychographics best match our target users?

Remote, unmoderated, task-based, qualitative, quantitative user research is an excellent addition to any consultant’s toolkit. It does not replace other methods, but with its larger sample sizes, geographic reach, globalization, and ability to quickly run studies, it adds value on its own merits.

Becoming a UX Researcher in the Games Industry

Why User Experience Research Is Growing as a Career in Games

The games industry is always transforming due to changes in player demographics and business models. The introduction of new technologies compels developers to create new kinds of game interactions and experiences. For example, as new publishing and distribution models like Free-to-Play (F2P) and Games as a Service (GaaS) gain popularity, the player experience and long-term engagement have become increasingly crucial factors for achieving commercial success. Given this constant transformation, the value of User Experience Researchers (UXR) in the games industry has become more prominent, creating exciting new career prospects.

This shift toward establishing UX research-focused roles is noticeable in the growing number of industry organizations and events focusing on this domain. For instance, in 2009, the International Game Developers Association (IGDA) established a dedicated Special Interest Group (SIG) for Games User Research (GUR), which was later renamed Games Research and User Experience. The SIG runs annual summits and mentoring programs, and hosts an active Discord community. The Game Developers Conference (GDC) responded to this shift by introducing a conference track dedicated to Games UX in 2017.

Becoming a UX Researcher in the games industry can be an attractive career path. It offers a unique opportunity to merge one’s passion for gaming and research into a profession that can have considerable impact. Games UX Researchers advocate for players and improve their gameplay experiences based on research data and insights. They work closely with game developers during the development process to ensure that the final product creates a fun and engaging experience for players and meets designers’ experience objectives. This role allows UX Researchers to witness how their research contributions and recommendations translate into better player experiences, which can be incredibly satisfying. A notable sense of fulfillment comes from contributing to the success of games that many players enjoy.

The games industry is at the forefront of innovation and creativity, where technological advancements and artistic expressions combine to create new experiences. Games UX Researchers are well-positioned to take advantage of this by constantly adapting their methods and techniques to make the most of new opportunities and match the evolving gaming industry. This adaptability can keep the job fresh and exciting, providing new challenges and fostering a continuous learning environment. It also contributes to the intellectual aspect of the career, which is founded on understanding user behavior, psychology, interaction design, and product development. This combination can offer personal intellectual satisfaction while building demand for this expertise in the job market.

Additionally, diverse career opportunities within the gaming industry add to the appeal of this career path. Games UX Researchers can choose to work with game development studios, publishers, firms specializing in UXR, or as freelance consultants offering their services to multiple companies. This diversity means they can follow a career path that best matches their personal and professional goals. Moreover, given that research skills are highly transferable, Games UX Researchers can relatively easily change career domains and apply their knowledge to other fields of interactive systems.

The Skills Required for Games UXR

Games UXR allows people to combine a professional research skill set with a field they feel passionate about. One of the most crucial skills that games UX Researchers regularly demonstrate is understanding how to design, run, and analyze reliable user research studies that address the questions that emerge when developing games.

Some research objectives that game development needs to answer include discovering if experiences are fun, how to optimize that fun, and how to teach game mechanics effectively. Table 1 highlights key differences between games and other interactive systems that impact games’ UXR objectives and approaches (for more details, you can read Getting Ahead of the Game: Challenges and Methods in Games User Research, a previous UXPA article on this topic).

Table 1. Differences between games and other applications. (Adapted from Pagulayan’s “User-Centered Design in Games” in Human-Computer Interaction Handbook (2003).)

  • Process vs. results: The purpose of gaming is usually in the process of playing, not in the final result.
  • Imposing constraints vs. removing or structuring constraints: Game designers intentionally embed constraints into the game loop, but productivity apps aim to minimize constraints.
  • Defining goals vs. importing goals: Games (or gamers) usually define their own goals or how to reach a game’s goal. However, in productivity applications, the goals are usually defined by external factors.
  • Few alternatives vs. many alternatives: Games are encouraged to support alternative choices to reach the overall goal, whereas choices are usually limited in productivity applications.
  • Functionality vs. mood: Productivity applications are built around functionality, but games set out to create mood (for example, using sound or music to set a tone).

Game development offers a unique environment to deploy a research skillset, so researchers are required to understand the medium of games. This is not only about knowing how to have conversations with players but also about understanding the game development process and disciplines involved. The process of making games has developed separately from other software development approaches, and many of the product processes, terms, and disciplines will be unfamiliar to someone from another industry. Specifically, a researcher interacts with producers, game designers, insight professionals, and UX designers. Understanding how games are made and who makes them is necessary to be an effective games UX Researcher.

Finally, communication is a core skill for games UX Researchers. Studies uncover opportunities and problems that stretch across the whole company, so a multidisciplinary effort with designers, producers, and developers involving the whole team is required to prioritize and fix issues. Communication skills, confidence in giving presentations, and relationship-building skills with colleagues are pivotal. Ensuring that research findings can be understood—that findings feel relevant and important to a broad range of colleagues—is essential for success.

Getting into the Games Industry

Because it’s an industry people feel passionate about, open roles can be infrequent, and there is a lot of competition for games UX Research roles when they do open. This means that candidates must demonstrate their mastery of the core skills to stand out.

To develop these core research skills, many people come into the field with post-graduate study (master’s and PhD levels). In the IGDA’s 2019/2022 survey, 24% of people working in games UXR had a PhD, most commonly from psychology, HCI, or neuroscience backgrounds. However, academia isn’t the only way to get and demonstrate study design experience. Hiring managers are often open to applicants who already work in UXR in other fields that have given them experience in qualitative and quantitative research. Working with hobbyist game developers to gain experience applying user research methods to games will help build confidence that you’re ready to apply your skills to games.

In smaller teams without a dedicated UXR role, designers, producers, and quality assurance managers often run studies referred to as playtests. With support from a mentor, courses, or the wider research community, one can get the experience necessary in designing and running studies at a professional level. 

For people entering the games industry, hiring managers will want evidence that candidates understand how games are made and that they can have constructive conversations with colleagues from other disciplines. This can be a particular challenge when trying to join the industry at a senior level, because candidates are expected to be able to represent the games and UX disciplines immediately. Luckily, some great books and talks introduce game design and development, which will help aspiring games UX Researchers understand the domain. A Playful Production Process, by Richard Lemarchand, the former lead designer of Uncharted™, describes his approach to developing hit games and how UX research fits in. The Game Designer’s Playbook, by Samantha Stahlke and Pejman Mirza-Babaei, also gives helpful context on designing fun interactions for games.

As mentioned above, games UXR has an active community, including conferences and newsletters. Finding a network on social media or Discord™ and following industry discussions may help make integration into the games industry easier. The IGDA’s Games Research and UX Discord community is a great place to get started alongside curated lists of games researchers on X (Twitter).

The Changing Role of a UXR

Similar to other careers, games UXR offers the potential to develop your skillset as you advance. At some companies, the most junior roles are focused on moderation and execution of studies. Junior researchers can be the team members who spend the most time face to face with players, asking questions, administering surveys, and observing studies. This is frequently done in partnership with a more experienced researcher who has designed the study. This role can create opportunities to develop study design and analysis skills and understand how to apply them in the industry.

Mid-level researchers are typically trusted with the end-to-end development, execution, and debriefing of a study, including working with a development team to confirm a study’s focus, deploying a range of research methods to gather reliable data, and drawing that together into a clear and compelling debrief.

As a researcher progresses in seniority, relationship building and proactively advocating for the discipline becomes important. Senior researchers are expected to help teams unfamiliar with user research determine what’s important to test and how. Senior researchers can often expect to be the sole representative of their discipline in a team, so persuasion, communication, and influence become increasingly important. It can be a very rewarding career for people who are interested in using their research skills to understand people (not just users, but also their colleagues).

As a researcher develops further, they can often decide whether to focus on coordination and people management of other researchers or to take a principal route that focuses on deep skill development and becoming the go-to person for a specific method or focus area. Focus areas include accessibility in games, player trust and safety, particularly in online multiplayer games, platform-focused areas such as virtual and mixed reality (VR/MR), or techniques such as reliably measuring attitudes or retention over time.

A Career on Hard Mode?

Despite the rewards, game development can be a demanding career, and it’s not uncommon for people to leave the games industry after five to ten years.

Depending on the team you’re working with, some roles can be repetitive. In a single studio, a researcher may work on the same title for many years, running similar study designs. A lot of UXR for games is focused on evaluative methods, such as usability testing or gathering ratings, which can offer limited opportunities to apply a wider research skill set.

As in many industries, games UXR lacks recognition at senior levels, and roles beyond the director level are currently uncommon. Some people move to adjacent roles, such as design or production, to continue developing their careers.

Game development is also known as an unstable environment, with layoffs common when games fail to hit critical success (or even after a hit game, due to poor planning!). Unless a researcher is based at a large established publisher, they may have to change roles unexpectedly or relocate for a new position, which can be difficult to balance with care responsibilities or a family. Many game professionals decide to seek more stability later in their career and eventually leave games.

Because games is a passion industry, wages can often be lower than equivalent roles in other industries such as tech and finance. This can become increasingly challenging in high cost-of-living areas. Ultimately, working in games isn’t always fun. It is an industry that players (and colleagues) have a deep passion for, and they are often deeply invested in their craft and creating a positive experience for players. For many people, especially earlier in their career, this is an attractive offer!

The Future Trends in Games UXR

As discussed, technological and business advances have constantly reshaped the games industry, allowing developers to make novel forms of interactions and experiences. These advances also pose challenges and opportunities for games UXR; for example, researchers may be the first to evaluate new interactive experiences and their social implications to further our understanding of humans and play. In this section, we will highlight some of the technological trends currently shaping the future of UXR in games.

Development of Novel Interaction Methods

We’ve come a long way from gameplay experiences focused only on a single player using standard devices like a mouse, keyboard, and screen. Nowadays, online gaming platforms and streaming services have made large-scale interactions between massive groups of players commonplace. Technologies such as virtual and mixed reality (VR/MR) headsets and the widespread use of augmented reality (AR) in smartphones enable compelling, immersive experiences accessible to many people. Furthermore, cutting-edge technologies are pushing the boundaries of interaction design by exploring sensory channels beyond sight and hearing, including touch and smell. This expansion is opening new possibilities for games.

Advances in User Data Collection

As business models trend away from traditional boxed releases toward games that develop long-term relationships with players, game developers and UX Researchers are collecting more data related to player behaviors, preferences, and game performance metrics. This data allows for creating highly detailed profiles of individual players, including factors like purchase history, play session durations, in-game actions such as combat style, and time taken to solve puzzles. It also enables UX analysis of larger player populations and helps answer high-value business questions, such as how to optimize player retention and in-game item sales.
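As a purely hypothetical illustration of the kind of record behind such profiles, the Python sketch below defines a minimal telemetry event; the field names are invented and do not reflect any particular engine’s or studio’s schema:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class PlayerEvent:
    """One gameplay telemetry event (illustrative schema only)."""
    player_id: str
    session_id: str
    timestamp: datetime
    event_type: str   # e.g., "puzzle_solved", "item_purchased", "combat_started"
    payload: dict     # event-specific details

event = PlayerEvent(
    player_id="p-1042",
    session_id="s-981",
    timestamp=datetime.now(timezone.utc),
    event_type="puzzle_solved",
    payload={"puzzle_id": "ruins-03", "seconds_to_solve": 48.2, "hints_used": 1},
)
```

Aggregating streams of events like this per player is what yields profiles such as median session length or preferred combat style, and aggregating across the whole player base supports retention and monetization analyses.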

Automation and Artificial Intelligence (AI)

Game development can be a very time-sensitive environment and has traditionally relied heavily on skilled human labor for tasks like game creation and UX evaluation. Procedural content generation (PCG), which uses automation to assist in generating game content, has been a well-established practice in the industry. Nowadays, content creation is evolving further as AI technology becomes more sophisticated. Games UX Researchers are exploring whether AI technology can aid game testing and data analysis.
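To give a flavor of the PCG idea, here is a deliberately tiny Python sketch: a seeded generator that produces a reproducible grid of wall and floor tiles. Real PCG systems are far more sophisticated; the point here is only the core property that the same seed always yields the same content, which matters when testers need to reproduce what a player saw:

```python
import random

def generate_room(seed: int, width: int = 10, height: int = 6,
                  wall_chance: float = 0.2) -> list[str]:
    """Toy procedural content generation: a seeded, reproducible tile grid."""
    rng = random.Random(seed)  # same seed -> identical layout every run
    return [
        "".join("#" if rng.random() < wall_chance else "." for _ in range(width))
        for _ in range(height)
    ]

for row in generate_room(seed=42):
    print(row)
```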

Opportunities to Learn More and Develop

The demand for games UXR is rising as research plays a pivotal role in helping developers achieve their player experience goals. Yet many game studios, particularly smaller teams, lack dedicated UXR personnel, or their existing research staff cannot meet the research demands. Maturing and scaling research in the gaming industry continues to be important.

While expanding a research team may not always be feasible due to budget constraints and the prioritization of other development talent, there’s an alternative solution: democratizing research. This approach involves empowering and educating non-research team members to conduct research effectively. However, it needs support, such as covering educational costs, incentivizing learning initiatives, providing access to learning resources, and offering mentorship. It’s important to note that there are associated risks, such as maintaining research validity and concerns about the impact on established research teams. Deciding which projects should involve the research team and which can be handled by non-researchers, given the necessary resources, is essential for success.

Further reading might include the Games User Research book, which offers an extensive collection of insights and best practices from over a dozen games UXR experts. The book covers topics such as planning user research, obtaining actionable insights from research, and determining the most suitable methods for various scenarios. Steve Bromley’s book How to Be a Games User Researcher applies lessons from running the International Game Developer Association’s Games UX research mentoring scheme to help people start their career. It covers research methods, game development, and career tips. His website, https://gamesuserresearch.com/, offers further career guidance and help, including a free book sharing the secrets of games research hiring managers.

Total Recall: The Consequence of Ignoring Medical Device Usability

A nervous thirteen-year-old girl sat in a pre-operative room. As she spoke with the anesthesiologist about the impending knee surgery, a nurse came by with a clipboard, requiring that the eighth grader confirm she needed surgery on her left knee. She checked the box “Left,” and signed her name. Then the nurse handed her a giant permanent marker and asked the young patient to label her “good” leg, “NOT THIS ONE” and “NO SURGERY HERE.” Her finishing touch was a large, “NO!” on the top of her right kneecap (see Figure 1). The girl nervously laughed about why they might require her to do this.

Drawing of a patient's leg with No! No surgery here, no, no, no scrawled all over it in red marker.
Figure 1. In hospitals, patients are sometimes asked to label the part of their body where the doctor will perform surgery, or, in this case, they are asked to label the part of their body where the doctor should not operate.

Perhaps you have heard stories of a doctor operating on or amputating the wrong limb. Even though this is an age-old problem, some medical devices cause the user to confuse the sides of the body, consequently leading to recalls of the devices. Just last year, the U.S. Food and Drug Administration (FDA) recalled a software system because the interface led doctors to confuse the left and right sides of the brain when evaluating patients (see Figure 2). Imagine the consequences of this design flaw during brain surgery!

Left and right profiles of a human head labeled Left and Right.
Figure 2. This depiction of brain hemispheres is inherently confusing. The person’s right side is labeled “left” and vice-versa.

In 2008, the FDA recalled an ultrasound system because the graphics made users misunderstand the image orientation of the patient’s left and right sides. Users assumed that the patient’s right and left sides were oriented in the same direction as the transducer, but this assumption was incorrect. Usability testing reveals the assumptions users make, which can keep designers from overlooking device characteristics that raise risk for users and, ultimately, lead to a recall.

Unfortunately, evaluating the wrong side of a body is just one example among many usability issues challenging medical devices. In the medical field, problems with usability are referred to as “use errors” instead of “user errors” to defer blame away from the user. It is not the user’s fault when the interface sets them up for failure.

Use errors are defined as “something that the user does or fails to do that results in an outcome that the user or manufacturer does not expect.” Use errors can be reasonably predicted to occur. Their risk of happening can be minimized through contextual inquiry, risk analysis, and usability testing.

Users of medical devices fall into five general categories: the general public (patients and home caregivers), healthcare providers (clinicians, including physicians and nurses), laboratory and biomedical technicians, imaging specialists (such as radiologists), and pharmacy staff. Each user profile tends to have its own set of challenges for medical device development. Usability research can aid in predicting the use errors common within each user group and inform design.

Medical Devices for the Home

Use Errors Due to Displays

One glucose meter was recalled because its users incorrectly interpreted the numbers on the display. When it showed “2.2,” the square decimal point was so small and so close to the first “2” that users read the number as “22” (see Figure 3). As a result, the users thought their blood glucose levels were ten times higher than they really were. Patients would mistakenly believe they needed to adjust their insulin injections to a much higher level than necessary. The consequences of this mistake could range from severe hypoglycemia to diabetic coma or death.

A screen with exceedingly large digits two and two, and a very small decimal point between the digits.
Figure 3. This glucose meter screen makes it difficult to see the decimal point and may lead users to believe their blood glucose level is ten times higher than it actually is.

In another case, the FDA issued an industry-wide recall of glucose meters that led to similar consequences for patients. The meters allowed users to choose between units of measure during setup. This feature allowed the meter to be marketed in both the U.S. and Europe. Additionally, it accommodated users who travel and regularly visit doctors in multiple countries. In the United States, blood glucose readings are displayed in mg/dL, but in Europe and other parts of the world they are displayed in mmol/L. The units differ by a factor of eighteen (18), which could lead to patients mistaking the units and taking incorrect therapy actions.
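For readers unfamiliar with the two conventions, here is a minimal sketch of the arithmetic. The factor of eighteen comes from glucose’s molar mass of roughly 180 g/mol; the reading used below is an example value, not from the recall.

```python
# Blood glucose in mg/dL is roughly 18 times the value in mmol/L.
def mmol_to_mgdl(mmol_per_l: float) -> float:
    return mmol_per_l * 18.0

reading = 5.5                 # a normal fasting value in mmol/L
print(mmol_to_mgdl(reading))  # 99.0 mg/dL: the same physiology
# A meter silently switched to the other unit shows "99" where the
# patient expects "5.5", or "5.5" where they expect "99" -- and the
# wrong number drives the wrong therapy decision.
```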

Users originally toggled through different modes when setting up the time and units of the device. When they set the clock or changed it for daylight saving time, they would sometimes inadvertently change the units of measure from mmol/L to mg/dL or vice versa. Many times, users did not realize they had changed the units because the units were not always prominently displayed.

In response to this use error, the FDA required the manufacturers to remove the capability of changing units on all glucose meters. Now travelers are inconvenienced by having to own two meters, and suppliers must manage additional inventory, but the prospect of serious consequences is diminished.

In the case of glucose meters and many other medical devices used by patients at home, design teams should confirm that users understand labeling, warnings, and instructions for use. This can be accomplished through comprehension studies in which users rely on instructions, labels, or the device itself in order to use the device.

Use Errors Due to Ergonomics

Failure to test the usability of a device can lead to poor ergonomic design—another common cause of recalls of medical devices. In 2008, the FDA recalled a wheelchair that had the potential to pinch a user’s fingers between seat bars when the user was opening the wheelchair, resulting in injuries as severe as fractured or severed fingers.

To prevent the hazard of wheelchairs pinching fingers, a design team should create a detailed list of tasks associated with using a wheelchair and the steps within each task. They should then consider each way in which a task could be performed incorrectly, as well as the consequences of skipping a step.

For medical devices, the FDA requires a formal risk analysis as part of the design process (see Figure 4). In a risk analysis, a cross-functional team considers the probability of a use error occurring and harming the user. Then, they rate the severity of the resulting harm to create risk level ratings for each task. Tasks with the highest risk level are prioritized during the redesign process.

Sample chart.
Figure 4. This is an example of a Failure Modes and Effects Analysis (FMEA) for a hypothetical automatic external defibrillator. FMEA is used to evaluate the risk profiles of use errors.
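To illustrate the ranking step, here is a minimal FMEA-style sketch in Python. The task names, use errors, ratings, and 1-5 scales are hypothetical, invented for the example rather than drawn from the FDA’s prescribed format; the point is only that tasks are ranked by the product of severity and probability.

```python
# Each entry: (task, potential use error, severity 1-5, probability 1-5).
use_errors = [
    ("Apply pads",    "Pads placed in reversed positions",  5, 3),
    ("Power on",      "User cannot find the on button",     4, 2),
    ("Deliver shock", "User touches patient during shock",  5, 1),
]

# Rank by risk level (severity x probability), highest first, so the
# riskiest tasks are prioritized during redesign.
for task, error, sev, prob in sorted(use_errors,
                                     key=lambda e: e[2] * e[3],
                                     reverse=True):
    print(f"risk={sev * prob:>2}  {task}: {error}")
```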

Contextual inquiry is another useful tool for identifying potential use errors, since it reveals what users do during actual use. The design team must create a design that reduces the possibility of these use errors, and then ensure that the use errors have been minimized by performing an iterative set of usability tests.

Medical Devices in Hospitals

Use Errors Due to Misconnections

One hot topic in hospitals is medical device misconnections. A maze of tubes, connectors, and cords often surrounds patients in hospitals, and due to haste or momentary confusion, it is possible to mistakenly connect a ventilator air supply tube to an IV line. The Association for the Advancement of Medical Instrumentation (AAMI) and ISO offer a standard for small-bore connectors in healthcare (ISO 80369-1:2010). Currently, AAMI and ISO are developing enhanced universal standards for connectors used in medical devices to further minimize misconnections.

Correct practice of usability engineering dictates that mating parts should uniquely mate with their counterparts, with minimal possibility of an incorrect connection (see Figure 5). In 2010, a tool for cataract surgery was recalled because it was possible to mate two components incorrectly. This incorrect mating led to the generation of plastic dust during cataract surgery.

Illustration of two mating parts that are impossible to connect. Three prongs on one plug and two receptacles on the mating part.
Figure 5. Mating parts should uniquely mate with their counterparts with minimal possibility of an incorrect connection. The design in this picture prevents the user from making a misconnection.

Medical device manufacturers can prevent misconnections by performing usability studies in which target users (medical professionals) perform tasks in a simulated use environment, typically without instructions. The results of the usability study would reflect what a doctor might do in the worst-case scenario—a situation in which the instructions are forgotten, ignored, or misplaced.

Manufacturers should attempt to foresee use errors such as misconnections by performing a thorough task analysis and determining all possible misconnections. They must take into consideration all other devices in the user’s environment as a routine part of risk prediction.

Use Errors due to Alarms

Another major concern in hospitals is alarms. Medical device alarms and signals must be understandable and audible in their use environments. Usability testing with healthcare providers can reveal whether an alarm is effective in its various modes (for example, lowest volume, visual only, and sound only). Designers must test users’ responses to alarms in a simulated use setting that replicates actual noise levels in the hospital room, along with the effect of having multiple personnel, and the distractions of many other medical devices alarming simultaneously.

A monitoring device was recently recalled because of an ineffective alarm system. Device makers prefer to err on the side of overly sensitive alarms, which lead to false alarms. To prevent alarm fatigue, healthcare providers sometimes turn audible alarms down or off, which is what happened in this case. However, the visual indication alone was insufficient to notify personnel of a critical situation (see Figure 6). A usability study would have shown whether or not the visual alarm was attention-grabbing and action-inspiring. If the visual alarm alone was ineffective, designers might have chosen to prevent nurses from muting the audible one.

A poorly designed visual alarm where the warning is unclear owing to the excess numerical information surrounding the warning
Figure 6. Visual alarms and warnings need to be noticeable, understandable, and informative. This user interface is crowded with information and has poor color contrast. The “Warning!” message is difficult to see and uninformative. A usability study might reveal that this warning is not effective in inspiring prompt and meaningful action.

Logically, users are less likely to respond to a quiet, unobtrusive alarm. Designers need to decrease the possibility that a nurse will turn off an alarm by reducing the number of false-positive alarms emitted by a device. Likewise, the designers of adjacent devices should reduce the volume or intensity of less safety-critical alarms in relation to more safety-critical alarms. Standards are available for guiding design teams in the development of effective alarms, including International Electrotechnical Commission (IEC) 60601-1-8.

Hospitals offer diverse use environments with wide-ranging levels of noise and activity. Medical device designers need to ensure that alarms can be heard under circumstances of normal operation. In one scenario, an alarm on a fluoroscopic imaging system could be heard by the technicians in the control room but not by the doctors in the procedure room. Since doctors sometimes base split-second decisions on the output of the imaging system, the consequence of an unnoticed frozen screen or an otherwise misleading image (which the alarm would have flagged) could be fatal. As a result, the imaging system was recalled.

An understanding of how users interact with their environments is critical not only for designing effective and safe alarms, but also for medical devices in general. Designers must use contextual inquiry to unveil important characteristics of use environments and subsequently integrate these characteristics in their usability studies.

In sum, usability engineering offers numerous tools to avoid recalls due to use errors. The following three steps contain the key to reducing medical device recalls:

  1. Perform contextual inquiry to gain insight into potential use errors and to identify characteristics of the use environments that must be simulated in usability studies.
  2. Evaluate a detailed list of tasks to determine which tasks are the most risky. Then, design the device to reduce or eliminate risk.
  3. Test user interfaces in usability studies with representative users. Include tasks necessary for normal operation, along with the most safety-critical tasks.

Usability is a cornerstone of safety; use errors are frequently predictable and avoidable through proper contextual inquiry, task analysis, risk prediction, risk reduction, and usability testing.

Engaging Study Observers: An Overlooked Step in User Research

At AutoTrader.com, we have learned the value of actively involving observers when conducting usability studies. We have always invited stakeholders, such as product and project managers, visual designers, and interaction designers, as well as individuals with peripheral interests, such as marketing researchers and QA analysts, to observe usability studies. However, our practice has evolved to improve our approach and provide a more valuable experience for observers and our usability team alike. The goal of this evolution has been to ensure that observing usability sessions is an effective use of our colleagues’ time while also inviting them to provide valuable input for our findings reports. Our hope is that after reading this article you will not only gain insight into the advantages of incorporating observers into your practice, but also learn how to do so effectively.

Seeing is Believing

Attending usability study sessions provides observers with a unique opportunity to watch and listen to users as they interact with products. Prior to a study, stakeholders often find it difficult to imagine or articulate issues that may be encountered by end users, and they find it extremely valuable to witness issues unfold. It is during sessions that they get to see their concerns validated or eliminated and new issues and ideas uncovered.

Two (Or More) Heads Are Better Than One

Immediately after study sessions, our team conducts a debriefing where observers have the opportunity to learn from, and discuss findings with, the usability team and fellow observers. The atmosphere is one of energetic collaboration, with a diverse group represented in an open forum. In addition to seeing and hearing users’ perspectives, stakeholders are given the opportunity to see their project through the eyes of other observers, which provides valuable insight. They leave with enhanced understanding of issues and sometimes even with solutions. As the adage “two heads are better than one” suggests, valuable collective discoveries emerge beyond what a single observer might see.

The Magic of Mutualism

Observers are not the only ones to benefit from their inclusion. Researchers also benefit in gaining insights beyond what may have been captured in their session notes and recordings. Discussions with observers help researchers gauge if there is consensus on an issue or, conversely, when an issue may be contentious. Observers sometimes use interesting terminology and phrasing researchers might not be aware of, and key stakeholders oftentimes illuminate the history of a project, along with business considerations and challenges they face. As we describe findings and offer recommendations in the findings report, we’ve discovered that learnings from observer comments can help us craft our research debriefs in a more persuasive manner.

As a direct result of our observer management process, we have seen increased interest in our work from both project stakeholders who are well-versed in our practice and observers who had not previously been exposed to it. This has raised awareness of how we can help projects, giving us the opportunity to undertake a wider variety of research activities.

Watching users interact with products is no longer framed as an “if” or “maybe” aspect of a project timeline—it is now framed as “when” and “how.”

Establishing Best Practices

As you can see, the value of inviting others to observe usability sessions has been firmly established in our practice for quite some time. However, the day-to-day logistics of exactly how to include observers effectively have seen significant evolution. We have learned a number of lessons along the way.

Before the Study: What Doesn’t Work

It used to be that when inviting observers to study sessions, a general email about the study was sent and observers were asked to reply with which sessions they wanted to attend. Our team tracked responses, but formal calendar invites weren’t sent. As you can imagine, observer no-shows were common.

Additionally, when preparing the observation room prior to a study, we did not place note-taking materials at each observer seat. Rather, note-taking materials were placed in a small stack between seats and the test plan and participant demographic sheet were placed at each seat. In most cases, note-taking materials went unnoticed, while observers noticed the test plans and participant demographic sheets immediately.

What Does Work

Our process for inviting observers to study sessions now involves the following steps:

  1. Outlook calendar invites are created for each session of the study.
  2. Key stakeholders are automatically invited to each session two weeks prior.
  3. An email describing the study and welcoming observers is sent out to others who may be interested in the sessions.
  4. Outlook calendar invites are sent to individuals who responded to the email.

We allow up to ten observers per session, with executives, upper management, and key stakeholders given highest priority. We then fill the remaining observer slots with members of our department and other research teams on a first-come, first-served basis. Observer no-shows have decreased significantly since we started sending calendar invites.

Additionally, from our first contact with potential observers, we now establish and define the observer role so that participation expectations are set. In the invitation email, we state that observers are responsible for recording at least two findings and sharing them with the group after each session. The observer role is emphasized again when the calendar invite is sent to each individual.

When setting up the observation room, we now strategically stack observer materials at each seat in the following order from top to bottom: large Post-It pad with a Sharpie, a cover sheet that lists brief observer “rules,” the study test plan, and the participant demographics list (see Figure 1). We also pre-label a whiteboard that can be used to organize the observer Post-Its after each session.

Sign with instructions for observers
Figure 1. Materials that are provided to each observer.

During Each Study Session: What Doesn’t Work

Initially, to engage observers during study sessions, we tried allowing them to write down anything they found interesting on Post-Its using a Sharpie. We did not give them clear instructions or set expectations as to how the notes would be used. In the end, observers wrote very little.

We also invited observers to be the official note-taker during sessions, but most were not interested. When there was interest, the quality of the notes suffered due to difficulty determining what information was appropriate to record and in what format.

At one point, we asked nothing of observers during sessions and did not provide them with a means to take notes. Consequently, observer engagement suffered.

In general, due to the lack of organization and clear instruction with the above methods, observers were not able to channel their thoughts in a way that was useful. As a result, they often engaged in conversations with one another, creating a noisy environment and making it difficult for the note-taker and other observers to hear participants.

What Does Work

Our current approach is much more structured. In the five minutes before each session, a usability team member verbally reminds observers that their participation is important. Specifically, we emphasize that the Post-Its provide a means for us to consider their input in study reports, and we describe how each observer will be given the opportunity to share their notes.

We also emphasize the purpose of the study and provide examples of findings that are and are not appropriate to capture during a usability study, focusing on observations of user behavior rather than user opinions.

Observers have been notably quieter as they actively and enthusiastically record their observations on the Post-Its, knowing they will have an opportunity to discuss the issues after the session. We have found that giving observers a specific task, providing them with materials to complete the task, and explaining why their contribution is important works wonders at keeping the noise level down and involvement up.

After Each Study Session: What Doesn’t Work

Initially, our session debriefs were short, casual, and unstructured, with some rough notes captured informally on a whiteboard. Sometimes observers turned in handwritten notes, but we didn’t collect them in an organized fashion. Observers often left sessions without key takeaways, or with incorrect takeaways, and missed out on hearing insights from fellow observers because very few observers participated.

To increase collaboration and participation, we formalized our debriefs and recorded organized notes on a whiteboard. Originally, we organized notes by participant and only recorded findings that were unique to a given session. Observers were noticeably quiet during the debriefing or engaged in side conversations since some may not have attended previous sessions and did not know what was unique about the session they attended. Additionally, the notes were of limited use to our team when creating the findings report since we group our formal findings by screen or topic and not by participant.

We eventually began labeling the whiteboard by topic and recording notes with frequency information next to each finding. This change helped when transferring the notes to the findings report, but it did not increase observer engagement. We inadvertently created awkward, frequent lulls in conversation when turning to the whiteboard to handwrite findings, and ended up talking amongst ourselves more than we would have liked. Observers relied heavily on memory as discussion progressed, sometimes misremembering an event or discussing the issues that were most memorable while overlooking or downplaying others.

What Does Work

We have now evolved into a more structured method that sets expectations around the preferred level of observer involvement and provides clear guidelines for observer participation. We set the stage at the beginning of debriefs by reminding observers that they play an important role, describing the purpose of the debrief and how long it will take, and reminding them that everyone will get to share their notes.

Each observer then shares their Post-It notes with the entire room and helps categorize them on the whiteboard (see Figure 2). Involving the observers in the notes-mapping process assists in keeping the conversation flowing. For example, we might ask an observer, “Where do you think this goes on the board?” If time is running out, we focus on a rough mapping during the debriefing and correct as necessary after the debriefing.

Photo of room with notes covering two walls
Figure 2. A labeled whiteboard with categorized observer notes.

We designate a “parking lot” section on the whiteboard so we can politely acknowledge notes that are off topic. As a general rule of thumb, we set aside at least thirty minutes for debriefs after each session, and debriefing does not end until all observers have shared their notes. At the end of each debriefing, or earlier if needed, we caution our observers not to make generalizations too early and without having observed multiple sessions.

The levels of enthusiasm and energy remain consistent throughout the debriefing as observers take ownership of the documented findings. Lulls in conversation are no longer an issue since we quickly categorize the Post-Its, and observers are eager to empty their hands of Post-Its. Interestingly, as an added bonus, observer debriefing has almost become observer led, requiring less work of our team than any of the prior methods we tried.

Conclusion

We take great pride in having created an atmosphere of transparency and collaboration between our usability team and observers of usability studies. Building this atmosphere has not been without its challenges, but we have successfully found ways to balance observer inclusion with efficient study goal accomplishment.

By openly and directly communicating with observers prior to studies, providing them clear instructions and materials during studies, and giving them an organized way to express themselves after study sessions, your practice will only improve. As your practice improves, never forget to express your appreciation frequently. After all, a rising tide lifts all boats.

A Moderated Debate: Comparing Lab and Remote Testing

Remote unmoderated testing provides sample sizes large enough for statistically significant results, which executives often say they require to make informed decisions about their websites. However, usability practitioners may question whether the large numbers, although seductive, really tell them what’s wrong and what’s right with a site.

Also, you can tell whether an observation is important or not by the number of people from a group of 200 to 1,000 participants who hit the problem. Of course you can’t observe participants’ difficulties and successes first hand. The lack of visibility might seem to be a serious disadvantage, but being unable to see participants does not prevent you from recognizing success or failure. Online testing tools anticipate and prevent most of the problems you might expect. Most importantly, you don’t have to leave the lab (or your own experience) behind.

Practical Differences

Online and in-lab testing methods are different in what you can test, in how you find and qualify participants, and in terms of rewards.

What you can test:

  • Moderated – A prototype at any stage of development.
  • Unmoderated – Only web-based software or websites.

Finding participants:

  • Moderated – Use a marketing or recruiting company or a corporate mail or email list.
  • Unmoderated – Send invitations to a corporate email list, intercept people online, or use pre-qualified participants from online panels like Greenfield Online, Survey Sampling International, and e-Rewards Opinion Panel, or from the remote testing firms themselves.

Qualifying participants:

  • Moderated – Ask them; have them fill in questionnaires at the start of the session.
  • Unmoderated – Ask them qualifying questions in a screener section and knock out anyone who doesn’t fit (age, geography, ownership of a particular type of car, etc.). Don’t let them retry—set a cookie (see the sketch after these lists).

Rewards:

  • Moderated – $50 cash, gifts. At $50 each, cash rewards for ten to twenty participants cost $500 to $1,000.
  • Unmoderated – $10 online gift certificates, coupons or credits, or raffles. At $10 each, rewards for 200 to 1,000 participants cost $2,000 to $10,000.
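As promised under “Qualifying participants,” here is a hedged sketch of screener knock-out logic. Real remote-testing tools implement this inside their own survey engines; the field names, criteria, and cookie details below are hypothetical, illustrating the idea rather than any vendor’s API.

```python
# Required answers for a hypothetical study.
REQUIRED = {"age_bracket": "25-40", "owns_suv": True, "country": "US"}

def qualifies(answers: dict) -> bool:
    # Knock out anyone whose screener answers don't match every criterion.
    return all(answers.get(field) == value
               for field, value in REQUIRED.items())

answers = {"age_bracket": "25-40", "owns_suv": False, "country": "US"}
if not qualifies(answers):
    # The web layer would also set a long-lived cookie so a disqualified
    # visitor can't simply reload and retry, e.g.:
    #   Set-Cookie: screener=done; Max-Age=31536000
    print("Thanks -- this study is full.")
```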

What Isn’t Much Different

Although remote unmoderated testing requires very little human interaction during the tests, it does require a high level of expertise both for the setup and for the analysis at the end. The remote testing software provides data crunching and statistical analysis, but neither scripting nor report writing is automated, nor is it possible to automate them. Information is never the same as understanding.

Scripting

The test scripts are not significantly different in content, only in delivery. Every moderated test has a script—the questions the client wants answered—even if the participant doesn’t know what it is.

In an unmoderated test, on the other hand, the analyst has to share the questions with the participants. The trick is to make sure that these questions don’t lead participants to the “right” answer or tip them off about the purpose of the test.

Also, the online test script has to be more detailed and the follow-up questions have to be thought out in advance. For example, if you think participants might say they hate a particular option, the script needs to be able to branch to a question asking why they hate it. You also need to include open-ended questions so that participants can comment on items about which you hadn’t thought to ask.

Relative Costs

The cost of a remote unmoderated test is comparable to the cost of lab testing.

If you have to, you can do a lab test very cheaply (say, for a small non-profit website) by borrowing a conference room, asking some friends to show up, and doing all the recording by hand. You can’t do a casual remote unmoderated test, however, because there are too many people and too much infrastructure involved.

But when you test high-priced, high-value websites, the costs for in-lab and remote testing are similar. As Liz Webb, co-founder and partner of eVOC Insights, points out, “If you need to run a lab test in an expensive area like Los Angeles, that’s $20,000 to $30,000,” whereas “a remote unmoderated test would start around $30,000 for 200 participants.”

Also keep in mind that the three best-known firms, Keynote, RelevantView, and UserZoom, offer different services at different price points. For example, a one-time test in which you write your own script and analyze your own results costs $8,000 to $13,000 depending on the firm. (You’ll probably need some training or handholding the first time, which might add to the cost.)

For a study in which the firm writes both the script and the report, the cost is likely to be between $20,000 and $75,000. All three firms offer yearly licenses at $70,000 to $100,000 depending on the level of service—this is a good option if you do more than five or six remote unmoderated tests a year. The costs for participants and rewards are additional.

Time Required

Liz’s timeframe for unmoderated tests is four to six weeks: “From kickoff with the client, one week to write the survey, one week for review by the client, one week in the field,” if all goes well, “and two weeks for analysis.” For a lab study, she estimates a week for the screener—it has to be more detailed than the unmoderated one so that the recruiting firm can find the right people; two weeks for recruiting and writing the discussion guide; one or two days in the lab; and two weeks for analysis. She also points out that if you’re doing moderated tests in multiple locations, you have to add at least one travel day between sites.

Comparing the Results

The real differences between moderated and unmoderated tests appear when you look at the results. Deciding whether unmoderated testing should be part of your toolkit depends on the answers to these questions:

  1. Are large numbers of participants important?
  2. Will online participants’ comments be as helpful as those captured in the lab?
  3. What is the quality of the data?
  4. Will there be too much data to analyze?
  5. What kinds of information are likely to be missing in an unmoderated test?

Are More Numbers Better Numbers?

Asking what the difference is between samples of ten (moderated) and a hundred (unmoderated) participants is really the same question as “How many users are enough?” You have to look at your goals. Is the goal to assess quality? For benchmarking and comparisons, high numbers are good. Or is your goal to address problems and reduce risk before the product is released? To improve the product, small, ongoing tests are better.

Ania Rodriguez, who used to do unmoderated tests at IBM and is now a director at Keynote Systems, ran small in-lab studies to decide what to address in her remote studies. She said, “The smaller number is good to pick up the main issues, but you need the larger sample to really validate whether the smaller sample is representative. I’ve noticed the numbers swinging around as we picked up more participants, at the level between 50 and 100 participants.” The numbers finally settled down at 100 participants, she said.

Michael Morgan, eBay user experience research manager, also uses both moderated and unmoderated tests. “In general, quantitative shows you where issues are happening. For why, you need qualitative.” But, he adds, “to convince the executive staff, you need quantitative data.”

A new eBay product clearly required an unmoderated test, Michael said. “We needed the quantitative scale to see how people were interacting with eBay Express. [Faceted search] was a new interaction paradigm—we needed click-through information—how deep did people go, how many facets did people use?” Since the automated systems collect clickstream data automatically, heatmaps were created that showed his audience exactly where and how deeply people went into the site.

Liz said that a small sample is good for the low-hanging fruit and for obvious user-interface issues. However, “to feel that the statement ‘[most] of the users are likely to return’ is reliable, you need at least 100 responses.” For 100 participants, you’d need results showing a difference of twenty points between the people who said they’d return and those who said they wouldn’t before you could trust the answer. However, “with 200 participants, about 12 percentage points is a statistically significant difference.”
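As a rough back-of-the-envelope check on Liz’s figures, here is a short sketch of the margin-of-error arithmetic for a yes/no question, assuming the worst case of a 50/50 split. The exact figures depend on the confidence level chosen; the article’s numbers roughly match a 95 percent level at 100 respondents and a 90 percent level at 200.

```python
import math

def min_detectable_gap(n: int, z: float) -> float:
    # At p = 0.5 the margin of error on a proportion is z * sqrt(0.25 / n).
    # The gap between "would return" and "wouldn't" is 2p - 1, so the
    # detectable gap is twice that margin.
    return 2 * z * math.sqrt(0.25 / n)

for n, z in ((100, 1.96), (200, 1.96), (200, 1.645)):
    print(f"n={n}, z={z}: {min_detectable_gap(n, z):.0%}")
# n=100, z=1.96:  20%  (the twenty points cited for 100 respondents)
# n=200, z=1.96:  14%
# n=200, z=1.645: 12%  (matches the quoted figure at a 90% level)
```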

Webpage with textboxes listing statistics relating to how often certain headings have been clicked
Figure 1. Comments are backed up by clickstream analysis.

Will Typed Comments Be Good Enough?

It might not seem that charts and typed comments—the typical output of an unmoderated remote test—would be as convincing as audio or video. However, they have their place and can be quite effective.

Ania said that observing during a lab session is always better than audio, video, or typed comments. “While the test is happening, the CEOs can ask questions. They’re more engaged.” That being said, she affirms that “you can create a powerful stop-action video using Camtasia and the clickstreams” from the remote tests.

According to Michael, “The typed comments are very useful—top of mind. However, they’re not as engaging as video.” So in his reports he recommends combining qualitative Morae clips with the quantitative UserZoom data. “We also had click mapping—heat maps and first clicks,” and that was very useful. “On the first task, looking for laptops, we found that people were taking two different routes,” not just the route the company expected them to take.

Liz added, “Seeing someone struggle live does make a point. But sometimes in the lab, one person will struggle and the next person will be very happy. So which observation is more important?” The shock value of the remote tests, she says, is showing the client that, for example, a competitor has 80 percent satisfaction and they only have 30 percent. “That can be very impactful,” especially if it indicates what the client can do to make the site more useful or usable. For example, on a pharmaceutical site, if the competitor offers a list of questions to take to the doctor, the client might want to do the same. “We want developers to see people struggling,” she adds, “but executives want to see numbers.”

What’s the Quality of the Results?

The quality of the results always starts with the participants: are they interested? Engaged? Honest?

Liz pointed out that “in the lab, participants may be trying to please you. You have to be careful in the questions you ask: not ‘Do you like this?’ but rather ‘What is your impression of this?’ And it’s the same problem online. Don’t ask questions so leading that the participants know what you’re looking for.”

In the lab, you can generally figure out right away that a participant shouldn’t be there and politely dismiss him or her. Online, however, you can’t tell immediately, and there are definitely participants who are in the study just for the reward.

The remote testing companies have methods for catching the freeloaders, the most important of which is watching the studies for anomalies. Here are some of the problems the back-office teams catch:

  • A sudden spike in new participants. This is usually due to messages posted on “free stuff” boards. Unqualified or uninvited people see the posting and then jump on the test site to start taking the test. When the testing software sees these spikes, it shuts down the study until the research associates, who keep track of the tests while they are running, can check the boards and figure out what has happened.
  • Participants who zip through the studies just to get the reward. The software can be set up to automatically reject participants’ responses if they spend less than a certain amount of time or if they type fewer than a certain number of characters per task (a sketch of this kind of rule follows).
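Here is a minimal sketch of that kind of automatic rejection rule. The thresholds and record fields are hypothetical, not any vendor’s actual settings; the point is simply that speeders and low-effort responses can be filtered mechanically.

```python
MIN_SECONDS = 30        # hypothetical minimum time on task
MIN_COMMENT_CHARS = 15  # hypothetical minimum comment length

def is_valid(response: dict) -> bool:
    # Reject responses that were too fast or too terse to be genuine.
    return (response["task_seconds"] >= MIN_SECONDS
            and len(response["comment"].strip()) >= MIN_COMMENT_CHARS)

responses = [
    {"task_seconds": 12, "comment": "ok"},
    {"task_seconds": 95, "comment": "Couldn't find the checkout link."},
]
kept = [r for r in responses if is_valid(r)]
print(len(kept))  # 1: only the second response survives the filter
```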

Another data-quality pitfall is null results—how do you find out if a participant is stuck if you can’t see them getting stuck? One way is to ask people to stop if they’re spending more time on the task than they would in real life (see Figures 2 and 3).

When the participant clicks the give-up button, the script then asks them why they quit. Not every participant explains, but you generally get enough answers to tell the difference between a technical glitch and a real problem with the site.

instructions for a usability test
Figure 2. Participants are told to give up if the task is taking too long.

website with superimposed red arrow pointing to quit button in the usability test toolbar at the top of the screen.
Figure 3. On a Keynote Systems task-question screen, the give up button is at the upper right corner.

Drowning in Data?

It’s hard enough to compress dozens of pages of notes and ten to twenty videos into a fifty-page report. Working with thousands of data points and two to four thousand comments generated in an online study would be impossible if it weren’t for some tools built into the commercial products. All of the remote testing products automatically create graphs from the Likert scales, multiple-choice, and single-choice questions; do correlations between questions; and sort text comments into general categories.
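As an illustration of the comment-sorting step, here is a toy keyword-based categorizer. The categories and keywords are invented for the example; the commercial tools use their own, more sophisticated methods.

```python
# Hypothetical keyword rules mapping comments to rough categories.
CATEGORIES = {
    "navigation": ("menu", "find", "link", "lost"),
    "performance": ("slow", "load", "wait"),
}

def categorize(comment: str) -> str:
    lowered = comment.lower()
    for label, keywords in CATEGORIES.items():
        if any(k in lowered for k in keywords):
            return label
    return "other"

print(categorize("The page took forever to load"))  # performance
```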

Ania said, “I’ve never found that there was too much data. I might not put everything in the report, but I can drill in two or three months later if the client or CEO asks for more information about something.” With more data, “I can also do better segments,” she said. “For example, check a subset like ‘all women fifty and older versus all men fifty and older.'”

“You have to figure out upfront how much you want to know,” Michael said. “Make sure you get all the data you need for your stakeholders. You won’t necessarily present all the data to all the audiences. Not all audiences get the same presentation.” The details go into an appendix. “You also don’t want to exhaust the users by asking for too much information.” The rule of thumb is that a study should take no more than thirty minutes, about three tasks.

Liz took a broader view of quantity: “We now have stable measures from many tests. To create a benchmark, we ask certain questions—for example, ‘How likely are you to return to this site?’—only once, and in only one way per study across multiple studies.” With this information, she has a rough idea about how good a client’s website is in comparison with others in the same industry or across industries.

The Final Decision

With unmoderated tests, you get more convincing numbers if you need them. However, preparation has to be more extensive since you can’t change the site or the questions midstream without invalidating all the expensive work you’ve done already.

Unmoderated tests provide you with more data, but they don’t automatically provide you with insight. Since they don’t create or analyze themselves, you can’t remove usability expertise from the equation. The remote testing firms will support you—even if you write the script yourself, they’ll review it for errors—but if you don’t know what you’re doing as a usability practitioner, the results will show that.

On the other hand, if you’re good at spotting where people are likely to have problems, at developing hypotheses, and at recognizing patterns, the abundance of data can be just what your stakeholders want to see.

Usability Practice in China: An Update

About three years ago, I wrote an article for User Experience (Spring/Summer 2003) entitled “East Meets West,” on usability in China. In the intervening years, usability has become a popular topic in Chinese industry. The Sino-European Usability Center that I founded has now been involved in many usability projects. The experiences have deepened our organization’s understanding of the usability process and corroborated the essential views in the earlier article, but they have also caused us to update some of our opinions. Here I would like to share some of my new thoughts about usability practice in China.

Status of Usability in China

It could be said that usability emerged as a field in China only after 2000 and especially since 2003. The lag is due mainly to these factors:

  • The Chinese economic and industrial development level has been low for a long time.
  • From the 1950s to 1980s, China had a planned economy.
  • Disciplines like psychology and sociology suffered from various restrictions before and during the Cultural Revolution.
  • There has long been a preference for technology-related disciplines rather than humanities-related disciplines in Chinese society.

With the rapid growth of the Chinese economy and the process of globalization in recent years, however, Chinese enterprises realized that they had to strengthen their competitive edge to be able to survive and compete in the future. At the same time, more and more multinational companies have entered the Chinese market. These two factors have brought about a rapid increase in demand for usability.

It should be said that usability practice in China started from activities conducted by multinational companies. Since 2000, foreign companies such as Siemens, Microsoft, IBM, Nokia, Motorola, and eBay have been conducting various user-research projects in China. Some of them even set up usability groups.

Increasingly stiff international competition and the desire for development have also made user experience an important issue for many leading Chinese companies, including Lenovo, Huawei, Sina.com, Tongfang, ZTE, Kingdee, and Alibaba.com. Some maintain usability groups of over twenty people and have integrated user-centered design (UCD) into their processes.

The growth of the usability field and a community of interest has led to the formation of professional organizations. Founded in 2004, ACM SIGCHI China (www.hci.org.cn), which consists of the major leading HCI and usability players from academia and industry in China, sponsors an annual national conference. ChinaUI (www.chinaui.com), founded in 2003, is China’s most popular user interface design and usability website with some 85,000 registered members nationwide. UPA China (www.upachna.org) was set up in 2004 in Shanghai and organizes the User Friendly conference every year. The European Union-funded Sino-European Systems Usability Network project (www.sesun-usability.org) will be organizing five seminar and workshop tours around China and conducting joint usability studies in China. The first Harmonic Human Machine Environment conference (HHME) was held in October 2005; approximately 200 people, mainly from computer academia around China, attended. There have also been usability-related activities in the Chinese design and psychology communities. In addition, several websites run by individuals support information exchange on usability topics.

Although the number of people in China dedicated full-time to usability practice is still small, maybe around 400, many product designers and developers are interested in usability. They are young, full of enthusiasm, and eager to learn. Of the people who are most interested in usability, quite a few are from design backgrounds, probably because many companies employ design-trained people for user-interface design jobs.

At the first Sino-European Usability Seminar Tour held in Beijing, Shanghai, and Shenzhen, more than 200 people attended, with 80 percent from industry. Several companies sent more than ten of their employees to the event.

A survey we conducted during the tour revealed that most of these companies have set up usability-related positions and departments. The respondents said they believed that usability would become more important in their organizations and that the major challenges at the moment are to master usability practices and skills and then to get their work recognized by their bosses and product-line units. Therefore, they wanted to attend training courses and learn from case studies so as to be able to start practicing usability in their daily work quickly.

We also found that current interest in usability is mainly coming from consumer electronics producers, website companies, and vendors of off-the-shelf software. People in the digital entertainment sector have just started to talk about usability and playability. But in the domain of customized software solutions and applications widely used in people’s daily lives, usability has not yet become a regular topic.

Usability Practice by Natives vs. Non-natives

In the past, both non-native and native Chinese have practiced usability in China. The former were mainly involved in projects conducted by multinational companies and their projects were usually supported by local Chinese recruiters, translators, moderators, and facilities.

However, valuable information is sometimes hidden in subtle cues or deeply rooted in the social and cultural background, so barriers of language and culture can make a difference in usability studies. With the growth of local usability expertise, the “localization” of usability practice in China is an inevitable trend, and it will be reinforced by a difference in personnel costs.

In the process of developing local expertise, it is, of course, necessary for Chinese to learn from the experiences accumulated over the past twenty years in the West. Nevertheless, there has long been a discussion as to whether the usability methods developed in the West can be suitably used in other cultures. Based on our experiences doing usability studies and consulting in China, the fundamental principles undoubtedly work well for China. However, the operational details—for example, participant recruitment and scheduling, the use of informed consent agreements, and manners and behavior when interacting with participants—need to be adjusted for the Chinese culture. All these kinds of details can best be handled by native Chinese who are familiar with usability methods.

Developing Local Expertise: The Future of Usability in China

The fast growth in demand for usability has been driven by the Chinese economic development and the market globalization process. However, the obstacles and challenges I described three years ago (development teams with no multidisciplinary background and a shortage of UCD skills and experience) still exist.

In a technology-centered culture, technologies and technologists are often at the core of product development. Industrial design has been an established specialty in Chinese universities for the past ten years. In the prevailing culture, user-interface design is viewed as being concerned with a product’s appearance and use—the “external side” of products—so, until now, interest in and enthusiasm for usability has come not from the technological core of product design and development but rather from the visual designers. However, many visual designers are themselves unaware of user-centered design and expect their inspiration to come from so-called “good” design examples, not realizing that usable products come only from a user-centered design process.

We need to make a fundamental change. There is no doubt that the future of usability in China lies in the growth of local usability expertise. The most important thing at the moment is to train enthusiastic designers and developers so that they can practice basic usability in their daily work and promote UCD in their organizations. They need training in cost-benefit ratios, practical usability methods, planning methods, and UCD case studies. Through experience accumulated from their usability activities and with some help from experts and books, many have the potential to become full-time usability practitioners.

When a company is starting to practice usability, some guidance from experienced usability professionals and consultants is very helpful. Guidance can help the company master usability methods and apply them appropriately. Also, a successful pilot or showcase project helps an organization gain confidence in pursuing usability. Implementing such a project should fit the company culture, as different types of companies have different expectations for the outcomes. For example, companies that practice quality management processes like ISO 9000, CMM, and Six Sigma might prefer to see something measurable, as they believe in “No measurement, no management.” They might want usability activities to be integrated into their existing processes and require that the documentation and conduct of activities be consistent with the existing quality system.

The HCI curriculum is another area in which we need to make great efforts. Although there are over a thousand computer-science departments in Chinese universities, only ten offer HCI courses to undergraduate students. By providing such courses in more IT and design departments, we could have a new generation of designers and developers who are aware of human-factors issues and who can act as advocates for usability development in China. Some HCI and usability engineering textbooks have already been translated and published in China. ACM SIGCHI China and the Sino-European Systems Usability Network project are also planning to provide HCI training for teachers.

As the biggest consumer market in the world and a giant in product manufacturing, China needs to make the phrase “Made in China” mean “better user experience.” Although China is called “the world factory,” we still have a long way to go to become as strong in design and innovation as we are in manufacturing.

Realizing this, the Chinese government launched a nationwide initiative in 2005 to foster independent innovation. In the recently published National Science and Technology Plan for 2006-2020, we have seen, for the first time, phrases like “human-centered” and “ease of use” appearing in several places. Since the study of user experience is an important source of technological innovation and a foundation for building a human-centered society, we have every reason to expect an even brighter future for usability in China.

Designing for Vulnerable Users: Illustrations (May) Help Understand Complex Health Websites

The Internet offers a useful way to share and consume complex health information. Not only can health information be found through popular search engines, such as Google, but hospitals and doctors can also provide their patients with specific health information through the Web via patient portals and hospital websites. However, information is not always shared in a way that is useful for the end user. Websites often include complex health jargon with the expectation that because the information is available, it is usable. Yet as many user experience practitioners are aware, simply putting information on the Internet does not make it usable.

One area of our research is the use of illustrations to aid in the understanding of complex health-related text. In this article, we discuss the results of various studies that examined the role of illustrations in understanding complex online health information. We will highlight the main findings covering three areas:

  1. Using illustrations in online health information
  2. Measuring user experience
  3. Lessons learned from using illustrations

Using Illustrations in Online Health Information

Illustrations are widely used online. While online information is still largely textual, text is often accompanied by photos, graphics, diagrams, and other types of visualizations. Illustrations have the potential to attract attention, facilitate learning, increase enjoyment and engagement, and accommodate poor readers. Since health information often includes complex language or medical jargon, illustrations can play a crucial role in how effectively websites communicate it. This is especially the case for people who have difficulty using online health information in general, such as people with limited health literacy skills and older adults who might experience natural age-related cognitive decline.

But what does it mean to have illustrations on a webpage? Do they guarantee success, and if so, what kind of success? Are all types of illustrations equally effective? For whom are illustrations worthwhile? These questions show the diversity of elements that UX practitioners should consider when using illustrations in online health information. We researched two specific types of illustrations: illustrations that aim to explain (cognitive illustrations) and illustrations that are intended to enhance enjoyment (affective illustrations). For example, a cognitive illustration can be a drawing of a complex procedure and an affective illustration can be a photo of a doctor-patient interaction (see Table 1). Consider both types of illustrations when providing information online because both have the potential to alter how people consume and understand complex information. However, do not assume that merely adding images is sufficient for all users.

Table 1. Cognitive and affective illustrations

Cognitive illustration
[Image: a drawing showing that the treatment with the special needle is performed percutaneously (through the skin)]
A cognitive illustration is a picture, photo, or drawing that aims to explain (parts of) textual information. Cognitive illustrations help give a better understanding of online health information sources and may improve memory for medical information. These illustrations often use arrows or words to refer to the textual information.

Affective illustration
[Image: a photo of a doctor-patient interaction]
An affective illustration is most often a text-irrelevant photo that aims to enhance enjoyment and positive emotions from information that is displayed on websites. Affective illustrations do not directly aim to enhance learning from websites, but may do so indirectly by making a website more attractive and interesting for web users.

Measuring User Experience

Even though there seems to be a general consensus that “illustrations are worth a thousand words,” studies have shown inconsistent results regarding the effectiveness of using illustrations. Whereas some studies report positive effects of illustrations (for example in terms of improved memory, comprehension, and attention), other studies fail to find evidence for such effects or even demonstrate contradictory effects. While pre-testing affective illustrations for our studies, a 61-year-old female patient said about one of the affective illustrations, “This doctor appears friendly, competent, and careful,” while a 54-year-old female patient shared a comment about the same illustration: “A smiling doctor is non-information…remove this picture, I would say. If this illustration is attempting to give the feeling of good care, it is not working for me.” We discovered that there are individual differences in the extent to which certain images are helpful and desired.

Therefore, it is important to understand that illustrations have a certain effect on various users and that people differ in their needs and preferences when looking for and consuming online health information. Eye-tracking and think-aloud protocols can help us gain more insight into how online health information is used and for whom illustrations are most beneficial.

How are illustrations used?

There is ample evidence that users pay more attention to text-illustrated information than text-only information. Illustrations may help to attract the reader’s attention or direct attention to text-relevant pieces of information. Illustrations can also be used as a cue to read pieces of information when the accompanying illustration arouses interest. We recently conducted a think-aloud study where we asked participants to think out loud while navigating a health website. Overall, we found that illustrations were perceived as useful, and sometimes even necessary, for comprehending complex text information.

However, in a recent eye-tracking study among 61 Dutch adults, we found that adding cognitive illustrations to a webpage did not increase the total time spent on the webpage. Simply adding extra elements to a webpage might attract or direct attention, but this does not necessarily mean that people will spend more time on the website overall, or that the images help users understand the information. The effectiveness of text-illustrated information cannot be fully explained by mere attention to illustrations.

In another eye-tracking study, among 10 younger and 10 older US adults, we compared levels of attention to cognitive versus affective illustrations. We found that overall, users spent only a fraction of their total time on the webpage looking at the illustrations. Moreover, there was a difference in the amount of attention paid to the two types: whereas cognitive illustrations received some attention, affective illustrations were nearly ignored (see Figure 1).

[Image: (a) with cognitive illustrations, fixation patterns appear on both the images and the text; (b) with affective illustrations, there are no fixations on the images.]
Figure 1. Eye-fixation patterns when (a) cognitive versus (b) affective illustrations are presented on a webpage. Red indicates high levels of attention, followed by yellow and green that demonstrate lower levels of attention. These eye-fixation patterns are from the 10 older adults.

For whom are illustrations beneficial?

When we broadly believe that illustrations are effective, we assume that every reader makes active use of illustrations when they are present. But as shown in the examples above, we already know that some types of illustrations simply do not receive much attention. So then who actually makes use of illustrations and for whom do they lead to positive outcomes?

As illustrations may provide a compensatory function, we might expect that illustrations are particularly helpful for poor readers. On the other hand, research suggests that illustrations are especially helpful for good readers who can effectively integrate illustrations with text to establish a complete understanding of the information. Ultimately, illustrations provide different readers with various benefits. For instance, we have shown that when people with adequate health literacy view illustrations that accompany complex text, they spend less time reading text information and yet still recall the same amount of information as when they spend more time reading text information with no illustrations. However, illustrations appear to be more helpful for people with limited health literacy: when they attend to illustrations, they recall more of the text-illustrated information compared to when no illustrations are available.

It has also been suggested that older adults might especially benefit from illustrations. Older adults are considered vulnerable to poor online health communication due to declining basic cognitive abilities and less experience with Internet technologies. With potentially diminished total cognitive capacity, older adults may benefit from having information in both verbal (text) and visual (illustration) form. Yet our eye-tracking data show that older adults spend less time looking at illustrations than their younger counterparts. Interestingly, when they spend enough time reading text information, older adults in particular remember information more accurately. In these data, illustrations were only found to be beneficial for younger adults.

Lessons Learned From Using Illustrations

So what does it mean to include illustrations on a webpage? Do they guarantee success, and if so, what kind of success? Are all types of illustrations equally effective? Illustrations certainly have the potential to aid in understanding complex health websites. People are often drawn to images, and images can be used to break up large blocks of text.

But not all users are the same, and it is important to test images with target end users to ensure that the images have the intended effect. The worst case would be to include images that deter people from actually using the website and the information. The best case would be to include images that are universally helpful. While this may not be easy to do, user experience practitioners can work to understand users’ needs and desires. We should consider various ways of including images, such as displaying multiple images that intend to portray similar information in ways that meet different users’ needs. If we work to achieve this goal, we are one step closer to developing effective online health materials that are tailored to the needs of a wide variety of users.


More reading

Bol, N., Romano Bergstrom, J. C., Smets, E. M. A., Loos, E. F., Strohl, J., & Van Weert, J. C. M. (2014). Does web design matter? Examining older adults’ attention to cognitive and affective illustrations on cancer-related websites through eye tracking. In C. Stephanidis & M. Antona (Eds.), Universal access in human-computer interaction. Proceedings HCII 2014, Part III, LNCS 8515 (pp. 15-23). Switzerland: Springer International Publishing.

Bol, N., Smets, E. M. A., Eddes, E. H., De Haes, H. C. J. M., Loos, E. F., & Van Weert, J. C. M. (2015). Illustrations enhance older colorectal cancer patients’ website satisfaction and recall of online cancer information. European Journal of Cancer Care, 24(2), 213-223. doi:10.1111/ecc.12283

Meppelink, C. S., & Bol, N. (2015). Exploring the role of health literacy on attention to and recall of text-illustrated health information: An eye-tracking study. Computers in Human Behavior, 48, 87-93. doi:10.1016/j.chb.2015.01.027


Getting Better Design Feedback: Anonymized User Research

[Image: smartphone screen with a checkbox for “Anonymize my Information” and a Submit button.]

The early 2010s will not be remembered as good years for internet anonymity. From news outlets like the CBC, ESPN, and the Huffington Post, to social media sites YouTube and Facebook, many leading organizations have at least dabbled in prohibiting anonymous comments from their users, sometimes even backpedaling in the face of criticism after implementation.

In the user experience (UX) design field, the candid, unadulterated comments and feedback that researchers gather from usability testing or interviewing participants are the most valuable assets to inform a design. Surrendering a participant’s anonymity during a user research interaction risks compromising the authenticity of the feedback being sought, ultimately resulting in a less informed—and probably less effective—product.

User anonymity is particularly important in the enterprise research environment, that is, conducting user research and usability testing for tools, applications, and services administered by the participant’s employer. Participant feedback elicited under the duress of perceived employer review or scrutiny (imagine the mindset: “Am I going to lose my job because I expressed dissatisfaction with my supervisor’s website?”) yields biased input, putting the validity of your design at risk.

Even outside of the enterprise environment, you may be conducting research or testing designs for products that users trust with confidential or personally identifying information. For example, have you ever designed a website that provides health information to the public? If you’re facilitating a contextual interview with a member of that target audience, there’s a good chance that the participant may share a personal story about a medical condition that they or a loved one sought treatment for. This story could prove valuable for your design, but you need to protect the privacy of the storyteller—which is why facilitators should anonymize all participant information by default.

User experience professionals should conduct research studies that support anonymity-protected user feedback at every step of the design process:

  • Recruitment, Outreach, and Screening – Understanding and being transparent about how candidates were sourced.
  • Usability Facilitation – Gaining the user’s trust during contact phases, from the first contact or invitation through to the dialogue itself, whether during contextual interviews or cognitive walkthroughs.
  • Summary and Findings – Protecting the user’s anonymity after findings are collected and summarized (and negotiating with insatiable stakeholders and team members who are inclined to identify the participant by name).

Set Expectations with Team and Stakeholders

Protecting anonymity during user research may be instinctive for UX professionals, but the same isn’t necessarily true for cross-functional colleagues and client stakeholders. Imagine the surprise when, after you’ve spent hours conducting dozens of user research interviews with substantive findings, you hear during a client presentation that the senior executive stakeholder was expecting names and contact information for each of the participants, especially the ones who had not-so-nice things to say about the organization’s Help Desk experience.

Do’s

  • Set expectations about how research participant information will be protected at the onset of the project, as opposed to waiting until after the studies have been conducted. Position this expectation as a benefit to the client stakeholder: “We’ll make sure that your staff’s input is heard and protected, ensuring that the ultimate design isn’t shaped by external influences.”
  • Be clear to the participant about how you will protect his or her anonymity.
  • Socialize those expectations before beginning any research effort. Reinforce the idea that users are the product owner’s most valuable asset, and that keeping them happy, engaged, and trusting will lead to a better outcome.
  • Include limited persona information with key pieces of feedback—like broad department or division names—specific enough that it will resonate with stakeholders, but vague enough that no individual name can be deduced.

Don’ts

  • Don’t let managers observe sessions or watch videos of their direct reports.
  • Don’t identify an individual’s role if there is other corroborating information available that would uniquely identify the participant.
  • Don’t ever let a team member or stakeholder strong-arm you into revealing identifying information about participants who were told that their feedback would be kept anonymous! Failing to live up to your word regarding anonymity breaks your trust not only with that participant, but also with the participant’s peers. If you and your user research team are not careful about participants’ privacy, word will get around that your team is not trustworthy.

Recruitment, Outreach, and Screening

After contacting a research candidate to gauge interest in participating, you may have gotten the response, “How did you get my name?” Fair question.

User experience design professionals are no strangers to the challenges of recruitment. Our eyes tend to light up with excitement when we’re handed a lengthy list of usability testing candidates, complete with contact information, all ready to go. However, consider how those names got on those lists in the first place. Did the candidate opt in voluntarily? Or was his or her inclusion passive, scraped unknowingly from an unrelated registration source?

When handed a list of recruitment candidates, have a clear understanding of where those names came from, and be prepared to answer pointed questions about how you got their contact information, who nominated them, and what other information is being stored.

Honor requests to be removed from recruitment lists, and see to it that your sources do the same. On the other hand, engaged participants may be interested in helping again, so be sure to document candidates who are willing to return in case you need quick feedback with minimal screening for an urgent design need in the future.

During User Research Sessions and Usability Tests

Participants you engage with for user research activities are forming their sense of trust in you from the moment you make first contact. Every interaction you have with a participant, whether written, oral, or in person, moves the needle one way or the other on how much feedback they are willing to offer and how candid they’ll be about it.

When first contacting a research candidate who’s not otherwise familiar with you or your organization, be transparent about who you are and what you do. Participants may be more comfortable if you’re able to establish contact through a mutual connection. Explicitly mention the name of a colleague, supervisor, or department that will resonate with the participant as a trustworthy source.

At the beginning of the actual user research session—whether it be a contextual interview, usability testing, or a cognitive walkthrough—always conclude your introduction with some disclaimers that clearly define exactly what kind of information is being collected and how it will be used. Give the participant confidence that their identity will be protected even if their views and opinions about the broader organizational process and past design efforts aren’t so positive.

Understand that with anonymity ensured, some participants may be more willing than you imagined to “vent” about all different aspects of their user experience troubles, ranging from web applications to service design—sometimes even outside the scope of your study! Use this additional context to your advantage; refactor your designs with these external pain points in mind, and use them to draw big picture conclusions about holistic design problems that can’t be solved in the silo of the particular interface you were originally targeting for the study. Your stakeholders should appreciate that you’re trying to solve the design problem from more than one angle.

Telephone Sessions and Virtual Meetings

Even though user research efforts are sometimes limited by budget, scheduling, and logistical constraints that prevent you from conducting in-person studies, that’s no excuse for sacrificing your participants’ comfort and anonymity over the phone or through a virtual meeting.

In many ways, without the physical contextual cues offered by an in-person study, user experience professionals need to go out of their way to convey to remote participants that their anonymity will be protected. Imagine it from the position of a participant who suddenly finds themselves in the middle of an in-depth phone conversation with someone they’ve never met, being asked to convey their opinions and attitudes about a product or service that may pertain to their employer. How comfortable would you feel in that situation?

Make sure you begin remote sessions using the handset of the telephone, rather than speakerphone; ask for permission from the participant to place the call on speaker, even if no one else is with you. If there is someone with you, make sure to disclose that person’s identity before continuing. You can address this as simply as “I’m joined by my UX colleague Sam. Is it alright if I place the call on speakerphone so she can listen, too?”

On the topic of undesired eavesdropping, particularly if you share work space with colleagues or stakeholders, make it clear that unannounced “drop-ins” are not permitted during user research sessions. Imagine the betrayal a participant would perceive on hearing a superior chime in during the middle of a qualitative research question: “Hey, who’s that on the phone you’re talking to?” You might also try to find private office space.

Summary and Findings

With your user research studies wrapped up and usability tests complete, you should be on your way to a well-informed, user-centered design solution. But what becomes of all that great feedback and input you’ve received from your users along the way?

Next up in your design process, you’ll surely find yourself sharing the fruits of your labor with project team members and stakeholders. They’ll be wondering what their users were interested in, what they thought about the experience of using their product or website, and sometimes, what their names are.

There are myriad reasons why project team members or stakeholders may ask for the names of usability participants. Maybe the marketing team sees a cross-sell opportunity after hearing one of your customer journeys. Perhaps a project manager is concerned that a negative piece of feedback won’t go over well with a content owner and wants to see if the participant might reconsider the experience through a different lens. Or, conceivably, a stakeholder overseeing an enterprise product might be unhappy to hear that a user group is “misusing” a company tool or website, and wants to personally put a stop to it.

As a responsible user experience practitioner, you’re not only prepared with the raw data to support your design, but you’ve also done your due diligence to properly scrub it of identifying contact information. There are a number of effective techniques for anonymizing participant names in your studies.

  • Username Key. Create a table, always stored separately from your findings, that pairs each participant’s real name with a letter: for example, Mary as participant A, Jose as participant B, and so forth. In the findings, each piece of feedback is identified only by its letter. This is the most traditional technique for anonymizing user research participant data, but it does add a bit of complexity for the practitioner, who has to keep and maintain a separate protected file.
  • Encoded Name. To reduce the risk of at-a-glance identification by most audiences, without having to maintain a separate key file, consider web-based encoding tools. For example, the name Melissa Abraham is masked as TWVsaXNzYSBBYnJhaGFt when encoded using Base64 (see the sketch after this list). Without getting into the computer science of base encoding (it is simply a technique for converting clear text to symbols, often for communication among machines), the names of your participants are relatively safe from eyes peering over your shoulder. Don’t mistake encoding for encryption, however; encoding is trivially reversible, so it won’t protect your participants’ anonymity if findings are shared electronically with a computer-science-savvy audience. This technique merely guards against casual attempts to glean identity in a presentation setting or working environment.
  • Pseudonyms. If the audience for your user research findings is having trouble relating to alphabetical letter or encoded name representations, give your participant names some life—someone else’s that is. While typically reserved for persona profiles, using alternative names with stock photos can give your usability test results some character. Stakeholders might be more receptive to negative feedback about an anticipated feature when associated with a friendly-looking face, even if sourced from a third party digital asset library. Be careful about using funny names (“Silly Sally”) or names of famous people (“D. Trump”) as that might reduce the credibility of your findings.
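To make the first two techniques concrete, here is a minimal Python sketch. The participant names are invented, and the key is shown as an in-memory dictionary only for illustration; in practice the username key would live in a separate, access-controlled file.

```python
# A minimal sketch of the "username key" and "encoded name" techniques.
# The participant names below are hypothetical.
import base64
import string

participants = ["Mary Lopez", "Jose Rivera", "Melissa Abraham"]

# Username key: pair each real name with a letter (A, B, C, ...).
# Store this mapping in a separate, protected file, never
# alongside the findings themselves.
username_key = dict(zip(participants, string.ascii_uppercase))
# {"Mary Lopez": "A", "Jose Rivera": "B", "Melissa Abraham": "C"}

# Encoded name: Base64 masks a name from at-a-glance reading,
# but it is reversible -- encoding is NOT encryption.
def encode_name(name: str) -> str:
    return base64.b64encode(name.encode("utf-8")).decode("ascii")

def decode_name(masked: str) -> str:
    return base64.b64decode(masked).decode("utf-8")

print(username_key["Melissa Abraham"])  # C
print(encode_name("Melissa Abraham"))   # TWVsaXNzYSBBYnJhaGFt
```

Anyone holding the key file, or a Base64 decoder, can recover the real names, which is exactly why the key stays apart from the findings and why even encoded names should never leave your team.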

Of course, the above recommendations largely apply to written findings: raw data and summaries. If you’re collecting feedback on designs in other formats, like audio or video, your ability to completely anonymize the content may be more limited. For example, while video recordings of usability tests and cognitive walkthroughs need only contain footage of the design being used (a web browser or mobile device), the participant’s voice will remain intact. Short of having access to voice-distortion equipment, most UX professionals aren’t going to get around that. Instead, think carefully about when to begin recording; don’t start until after introductions and names have been exchanged. On the media files themselves, use naming conventions free of any identifying participant information, as described above (see the hypothetical example below). Finally, take solace in the fact that you probably don’t need to distribute the raw data outside your UX team anyway; you’re more likely to summarize findings in written form for stakeholders.
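As a concrete illustration, a hypothetical naming convention (the dates and task labels here are invented) might look like this:

```
participant-C_task2_2016-04-12.mp4       (letter drawn from the key file)
melissa-abraham_task2_2016-04-12.mp4     (avoid: identifies the participant)
```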

Rather than keeping these techniques in the back of your mind, try actively working them into the plans for your next usability test. Brief stakeholders on the value that anonymized user research will deliver to the design of the product. Before screening begins, understand where your candidates are sourced from. Update your research script to disclose anonymity protections as part of your greeting. Finally, make the necessary changes to your summary template so that your participant information is protected. The change in your findings will be subtle and gradual, but over time, you’ll glean those unfiltered insights that may have been held back before.