

Business Origami: Learning, Empathizing, and Building with Users

Design requires empathy and a deep understanding of users’ workflows and mental models, which is usually accomplished through user research. Generative research methods can be greatly improved through the use of collaborative design and research activities (most commonly referred to as participatory design), where end users work together to create an artifact involving their day-to-day work. By working closely with these end users, the product team is able to capture their attitudes first-hand, empathize with the target users, and understand what solutions may or may not be effective for the audience. Business origami is a quick and easy-to-explain collaborative design method that can be used to understand end users’ environments and teammates, as well as to supplement findings from other qualitative research methods.

Participatory design activities, such as business origami, can complement the data gathered from interviews and observations by providing the end users’ own perspectives of their interactions with their co-workers and surroundings. From this data, artifacts such as personas and as-is experience maps can be created. Both of these artifacts can be useful for creating empathy and a common understanding of the target user across the product team. However, the data gathered using these two research methods might be biased in various ways by the designer, as well as by the participants.

Examples of bias include:

  • Question-order bias: The order in which questions are asked.
  • Leading/wording bias: The way in which questions are asked.
  • Confirmation bias: The research questions that are guiding the interviews.
  • Social desirability bias: The possibility of participants altering their behavior during observations to change how they are perceived by the researcher.
  • Interviewer bias: The possibility that the interviewer’s body language or unconscious gestures affect the way interviewees answer a question.

While these biases may all be controlled to some degree by proper preparation and execution by the interviewer/observer, there is still potential for bias given that each participant has a unique background that is unknown to the researcher. Business origami shifts the dynamic of designer and end user in a way that minimizes the biases listed above by placing the end user in control of creating an artifact (as opposed to having the designer assume power through their control of the interview or through their position as an outside observer).

Introducing Business Origami

The goal of business origami is to have end users create a “map” of the various people involved in the ecosystem surrounding a specific product, design, or other solution (for example, a specific software product or an information kiosk in a public area). The product team members who are involved in the activity should take on the role of facilitators. The end users will place various small paper “icons” (see Figure 1) on a physical space (for example, a table or white board) to identify the people with whom they interact, the ways in which they communicate with each other, and the locations where these interactions take place. End users should also write relevant information on the icons or space. For example, end users could place separate icons for “engineer managers,” “engineers,” and “vice-president of engineering.” This activity is generally most effective when done in early or generative stages of user research for the following reasons:

  • It helps generate empathy for a target user group by providing clarity on the environment and interactions currently involved in target users’ lives.
  • By understanding the current ecosystem involving the target user group, designers can understand how to design most effectively for their target audience.
  • The map that is created can be preserved and shared with others on the product team who were not physically present for the activity.

Business origami can also be an effective method to use when target users have many complex interactions with other people and technologies. By visually mapping out these interactions, the product team can quickly understand which interactions could be improved through software or indirectly modified or affected through the introduction of software (see Figure 1).

Figure 1. Participants work on a business origami exercise that attempts to create a shared understanding of end users and their environments. (Credit: Nearsoft, Inc)

Preparation, Materials, and Execution of Business Origami

There are four main steps to running a business origami activity.

1. Outline goals and recruit participants

Before planning the actual business origami activity, it is important to decide on the goals and the proper participants. For example, a past project I worked on involved understanding the workflow and interactions of developers on the IBM Watson Solutions teams who worked with IBM’s clients to create software such as Watson for Oncology and Watson for Clinical Trials Matching. Both products were developed to assist clinicians with decision making about patient diagnoses, as well as with identifying possible clinical trials for which patients may be eligible (given their diagnoses). My goal as a designer was to use this information to help design new web-based software that would let less technical clients build their own similar products for industries beyond healthcare. Therefore, I recruited a representative group of the developers involved in those projects. Business origami is generally most effective with three to five participants: two end users may not have knowledge of all the relevant interactions, while groups of six or more may not all stay actively engaged. Given these guidelines, I recruited two groups of four participants each (eight total). One group created a map for Watson for Oncology, and the second group created a map for Watson for Clinical Trials Matching.

2. Prepare the business origami activity

Business origami requires a flat surface for placing icons. White boards and large pieces of paper (for instance, butcher paper or cardboard) are ideal since they allow participants to write comments on the paper or board itself, in addition to arranging the icons. The designer who is facilitating the activity should ensure that an appropriate space with these types of surfaces is available. The designer should also print out and cut out multiple copies of business origami icons. End users will arrange these paper icons to outline the various people, places, and communication methods (see Figure 2) that are involved in their day-to-day work. Depending on the type of end users involved, the designer may want to create other icons beyond those in publicly available sets (such as the SAP set shown in Figure 2). For example, given the IBM Watson Solution team’s focus on healthcare, specific icons were made to represent male and female doctors and nurses.

When planning a business origami exercise, leave the labels underneath each icon blank so that participants can fill in what they believe is appropriate. Also, print out several “blank” icons (shown in Figure 2): the label portion is included, but the top part is left empty so that participants can draw in a new person or object as needed.

Figure 2. Examples of business origami icons that represent different end users and modes of communication. (Credit: SAP)

3. Introduce the business origami activity

Provide an overview of the activity and its goal before beginning. Emphasize that providing rich detail about stakeholders and interactions is valuable, but that the artifact created should be understandable to a third party who was not present during the activity and who is not familiar with the participants’ line of work. Generally speaking, these are the instructions that should be given:

  • Use different icons to represent people, places, or things that are important parts of your work environment.
  • Draw arrows between these icons to represent the ways in which they interact and communicate with each other.
  • Write comments on the board/paper to help explain any important details.
  • Think about anything that helps with those interactions and communications, and place icons on top of arrows as needed.
  • There is no “right” way to map, since each of you has had different experiences. You can also move or remove icons and comments if you realize that is needed.

4. Conduct and discuss the Business Origami activity

Spend between 20 and 45 minutes on the mapping activity to allow enough time for discussion, creation of an initial artifact, and modification of the artifact. The designer should observe and answer any questions the teams may have, reminding them that there is no single “right” way to map out the people, places, and things involved in their work life. Fifteen minutes should then be spent sharing artifacts with the other participants. If there are multiple teams, a discussion will also help identify possible similarities and differences between the artifacts. Below is an example of what completed maps may look like (Figure 3).

Figure 3. A completed business origami activity made by the IBM Watson Solutions team to illustrate the IBM and client stakeholders (as well as the interactions within and between these groups).

Feedback from Participants

The IBM developers said they enjoyed the activity since it helped them see the “big picture.” There were two main developer roles, and both appreciated the opportunity to learn more about the other’s work and interactions with the client. Some developers noted that the activity was difficult to understand at first, given that they do not traditionally do “open” activities. This comment shows the importance of properly introducing the business origami exercise, as well as including examples of completed business origami activities. Product managers and other stakeholders who saw the completed exercises also praised the maps, saying that they helped them understand and empathize with their end users’ complex workflows.

Direct Input Creates Empathy

In participatory design activities, end users (rather than a designer) create artifacts that help inform the design and understanding around a specific user group and ecosystem. These artifacts could inform the design of a digital artifact (like software), a physical artifact (such as a gallery or bulletin board), or a service (such as checking yourself and your baggage in at an airport). By involving end users directly, the designer can gain a better understanding of end users’ perspectives and values, as well as the types of artifacts that may best meet their needs. Participatory design can also be effective when paired with other qualitative research methods that have potential for bias (such as interviews and observations).

Business origami is one type of participatory design that is (typically) a generative design activity. It can help a product team understand the various types of people involved, the objects and locations involved, and the flow of information.

For my work in industry, this method has been crucial in creating empathy for end users and their interactions with their teammates and their environments. Since the design team usually interacts with end users the most out of the various product team members, business origami can be a quick and powerful way to create alignment between the design, development, and product management teams. In less than an hour, an easy-to-understand artifact is created by end users and can be archived for reference. Business origami can also be done remotely if travel to end users’ workspaces is not possible or if end users are not collocated (either by screen sharing or through software such as Mural).
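When a map needs to be archived or shared digitally, its structure translates naturally into a simple graph: icons become nodes, arrows become edges, and written comments become annotations. Below is a minimal sketch of one way to represent this; the data model and the example entries are invented for illustration and are not part of the method itself.

    from dataclasses import dataclass, field

    @dataclass
    class Icon:
        name: str        # label written under the icon, e.g., "engineering manager"
        kind: str        # "person", "place", or "thing"
        notes: str = ""  # comments written on or near the icon

    @dataclass
    class Interaction:
        source: str      # icon the arrow starts from
        target: str      # icon the arrow points to
        medium: str      # e.g., "email", "face-to-face", "chat"

    @dataclass
    class OrigamiMap:
        icons: list[Icon] = field(default_factory=list)
        interactions: list[Interaction] = field(default_factory=list)

    # A fragment of a hypothetical development team's map
    team_map = OrigamiMap(
        icons=[
            Icon("engineer", "person"),
            Icon("engineering manager", "person"),
            Icon("client site", "place", notes="visited quarterly"),
        ],
        interactions=[
            Interaction("engineer", "engineering manager", "daily stand-up"),
            Interaction("engineering manager", "client site", "email"),
        ],
    )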

 


Listen Up: Do Voice Recognition Systems Help Drivers Focus on the Road?

Over the past few years, auto manufacturers have created infotainment systems, integrating control of multiple devices (including cellular phone, climate control, audio, navigation, and other media) into a single user interface. However, most of these infotainment systems (BMW iDrive, Audi MMI, Mercedes-Benz Command) still require manual input and visual attention. Microsoft and Ford have collaborated on a new infotainment device, Sync, which allows drivers to interact with mobile devices using voice commands. This feature allows drivers to keep their eyes on the road and hands on the wheel.

Sync makes infotainment technology available to a broad market base, offering it as a low-cost option on a wide range of models (including the entry-level Ford Focus). Ford estimates that by 2009 over one million units will have been sold. If successful, Sync may soon be the most influential infotainment technology on the market. This expected widespread distribution raises questions about the usability of infotainment devices and voice activation, and about their impact on driver workload.

Sync’s voice recognition technology is its primary strength over previous infotainment systems because it virtually eliminates scrolling and searching through a visual interface. This strength, however, may actually be a weakness if it does not provide the driver an easy and efficient method to complete tasks. For example, speaking voice commands instead of pressing buttons may distract drivers less, but if making a cell phone call using voice commands takes twice as long as pressing buttons, then the driver is distracted for a longer period of time, resulting in greater risk to driver safety.

Methods

We performed a timeline analysis to see whether two common in-vehicle tasks—calling a contact using a cell phone and playing a track using an iPod—were completed faster using Sync than using the native device interfaces. We also compared the time it took to complete different portions of each task, because Sync’s voice recognition technology may facilitate some portions of a task but be inefficient during others. For example, to play a song on an iPod using Sync, a driver can simply say the title of the song. Sync’s voice commands eliminate searching and scrolling through a long list of songs, but after each command, users have to wait for Sync to respond and prompt for the next command, which may negate any time savings. Decomposing total task time allowed us to identify both the strengths and weaknesses of Sync’s voice recognition technology compared to manual interactions.

In addition to assessing the performance of Sync’s voice recognition technology, we were interested in determining how this technology may impact usability for a broad driver population. To evaluate Sync’s usability we decided to explore potential errors and outcomes that drivers may encounter using Sync.

We performed a Failure Modes and Effects Analysis (FMEA) to identify all possible errors for each task step, along with the source, likelihood, probable outcome, and severity of each error. From this information we were able to identify the types of errors that are most likely to occur and the mechanism that would produce each error. Identifying the underlying source of the errors allowed us to evaluate whether proper counter measures are in place to minimize the severity and aid in recovery. Error recovery features not only reduce the negative impact on the overall user experience, they also help to minimize the error’s impact on driver distraction and safety.
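As a rough illustration of how FMEA records can be organized, the sketch below captures the fields described above and ranks errors by a simple likelihood-times-severity score. The scoring scheme and the sample entry are hypothetical; formal FMEA practice typically also includes a detectability factor.

    from dataclasses import dataclass

    @dataclass
    class FailureMode:
        task_step: str
        error: str
        source: str            # mechanism that produces the error
        likelihood: int        # 1 (rare) to 5 (frequent) -- illustrative scale
        severity: int          # 1 (minor) to 5 (task failure or safety risk)
        probable_outcome: str

        @property
        def risk_score(self) -> int:
            return self.likelihood * self.severity

    modes = [
        FailureMode(
            task_step="goal execution",
            error="voice command misrecognized",
            source="voice recognition engine",
            likelihood=4,
            severity=5,
            probable_outcome="wrong action performed; user must restart task",
        ),
        # ... one entry per possible error at each task step
    ]

    # Rank failure modes so the riskiest errors are addressed first
    for mode in sorted(modes, key=lambda m: m.risk_score, reverse=True):
        print(f"{mode.risk_score:>2}  {mode.task_step}: {mode.error}")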

Analysis and Findings

Like other infotainment systems, Sync has a broad range of capabilities and supports a number of plausible in-vehicle tasks. Since we were interested in general usability rather than all of these functions, we decided to evaluate Sync on two common tasks. We observed and recorded two drivers familiar with Sync as they called a contact on a cell phone and played a song from an iPod in a stationary Ford automobile. Then we compared their task performance with that of two other testers performing the same tasks directly on a cell phone and an iPod. The video recordings were coded, time-stamped, and used for the timeline analysis and FMEA.

Timeline Analysis

The timeline analysis allowed us to evaluate the relative efficiency of Sync interactions compared to manual interactions. Surprisingly, we found that, on average, tasks were completed faster using the iPod and cell phone than using Sync. Users took one second longer to play a music track and fifteen seconds longer to call a contact using Sync. Users did not spend any time scrolling or searching through device menus using Sync’s voice commands and still completed the tasks more slowly.

Based on results from a hierarchical task analysis, we decomposed total task time into four sub-task goals:

  • Initialization – Any time users spent turning on or activating a device
  • Menu access – Time users spent navigating to or scrolling within device menus
  • Goal execution – Selecting the desired function (pressing Send to place a call, locating a song) or issuing a voice command
  • System response – Time users had to wait for the device before continuing the task

After breaking total task time into these four categories, the source of Sync’s slow performance was clear: the majority of user interaction with Sync was spent waiting to proceed (i.e., system response; see Figure 1). In fact, if users had not had to wait for Sync, average task time with Sync would decrease by sixteen seconds, a 60 percent time saving.
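To make the arithmetic concrete, here is a small sketch of that decomposition. The per-category times are hypothetical, chosen only so that the totals are consistent with the figures reported above:

    # Hypothetical sub-task times (seconds) for one Sync task, chosen to be
    # consistent with the averages reported above, not measured values
    sync_times = {
        "initialization": 3.0,    # turning on or activating the device
        "menu access": 0.0,       # voice commands eliminate scrolling
        "goal execution": 7.7,    # issuing voice commands
        "system response": 16.0,  # waiting for Sync to confirm and prompt
    }

    total = sum(sync_times.values())
    waiting = sync_times["system response"]

    print(f"total task time: {total:.1f} s")
    print(f"time spent waiting: {waiting:.1f} s ({waiting / total:.0%})")
    # -> removing the waiting would cut task time by about 60 percent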

The excessive waiting resulted from the turn-taking structure for completing tasks with Sync. After each voice command, Sync repeats the command to confirm it and then gives the user a prompt for the next command. This process lasts only three or four seconds, but as users navigate through multiple menus to complete the task, this delay accumulates and task efficiency diminishes.

Timeline analysis also highlights how the task structure differs when completing tasks with Sync and with direct interaction. When users worked directly with the iPod or cell phone, they performed each task in a single continuous interaction. Completing tasks with Sync, however, required alternating chunks of user command and Sync response. This turn-taking is a natural byproduct of interaction via voice commands; in essence, users engage in a conversation with Sync.

Though Sync’s voice recognition system increased overall task time, Sync is not necessarily more distracting or harmful to driver safety. In fact, the system response delays may actually help drivers focus on the driving task. Consider a driver who is selecting a new song using an iPod while driving. Because the iPod allows continuous interaction, the driver must divert attention away from the road for an extended period of time. Continuous interaction and attention is not required with Sync; drivers have opportunities to focus all their attention on the road while completing distracting tasks. Interacting with mobile devices using Sync may be less distracting and may reduce the impact on driver safety.

Failure Modes and Effects Analysis

FMEA gave us a chance to identify Sync’s limitations by looking at the source, outcome, and probability of errors that users may encounter. All possible errors in both tasks were evaluated in the FMEA.

As an example, we discuss error scenarios during the goal execution stage (for example, “Call home”). On average, nine distinct errors could occur during the goal execution stage in our tasks. Nearly 75 percent of these errors resulted in task failure, requiring the user to restart. Most of the errors leading to task failure resulted from inaccurate processing of voice commands, incompatibilities between Sync and the paired mobile device, and general errors related to Sync’s voice recognition technology.

Errors caused by Sync’s voice recognition technology do not necessarily condemn a task to fail, but we found that Sync seemed to lack quick and easy methods for error recovery. As a result, voice recognition errors were more likely to result in task failure. For example, if drivers want to play a song by Bon Jovi, then during the goal execution stage they would say, “Play artist Bon Jovi.” Sync would then process the command and respond, “Playing artist Bon Jovi.” But what does the user do if Sync responds incorrectly with “Playing artist Billy Joel”? Currently, there is no way to interrupt and correct Sync or revise an incorrect command to fix the error. The only option is to repeat the task, prolonging the overall time the driver is distracted. The FMEA confirmed that overall user experience and driver safety are highly dependent upon the performance of Sync’s voice recognition technology.

Recommendations and Conclusions

Our evaluation of Sync supports voice recognition technology as an improvement over manual interactions with devices. Though tasks took longer to complete using Sync, the turn-taking conversation with Sync is more compatible with driving and provides more opportunities for the user to interleave driving with secondary tasks. Sync’s voice recognition technology is not without its flaws and would benefit from a broader menu structure that allows users to bypass menu levels or move directly to the desired function, especially at the higher levels of the menu hierarchy. On the other hand, this new feature could make Sync more complex because it expands the number of available voice commands at each menu level. This cost could be outweighed, though, by the benefit of reducing system response time and overall task time.

Based upon the FMEA findings, one feature missing from Sync was a quick and easy method to correct errors. Incorporating easy steps for error recovery is essential since Sync’s voice recognition technology will never perform flawlessly. A simple voice command—Back—might meet this need. If users make a mistake or if a voice recognition error occurs, users could say “Back” at any time to move to the prior task step. The “Back” command may not be the best or only option, but the spirit of this recommendation—an easy and readily available method for repeating a previous or current step in order to correct an error—would be a welcome improvement over the current requirement to restart the entire task.

In this article we set out to examine whether Sync’s voice recognition technology improved upon previous manually based interactions. We also wanted to explore Sync’s potential impact on drivers by considering its usability. In general, Sync and voice recognition technology improve upon manual interactions by interleaving secondary sub-tasks with the driving task more efficiently, and they may help to reduce driver distraction. Though current voice recognition technology is not perfect, simple design features can be incorporated to mitigate the severity of errors and enhance the user experience.

Drivers everywhere should be excited about the widespread availability of Sync and voice recognition technology in their vehicles. Not only will they be able to show off new infotainment technology, but most importantly, they will be able to interact with all of their mobile devices in a safer, less distracting manner.

[greybox]

Timeline Analysis

To accomplish tasks using Sync, drivers navigate through a series of menus until they reach the desired function. Sync’s menus are arranged from general media devices (for example, audio system, phone, or radio) to more specific device functions (for example, “USB” or “Dial”) as users move deeper into the menu structure. The advantage of a hierarchical menu structure is that the number of choices at each level is reduced. The disadvantage is that the menu structure is deeper and users must navigate through more menu levels. Navigating through additional menu levels can be slow and cumbersome, and in Sync’s case, results in longer task time due to system response time after each voice command.

Instead of a hierarchical menu structure, Sync could adopt a broad menu structure at the highest levels that would allow users to bypass menu levels and go directly to the chosen device or function. A broad menu structure reduces the total number of voice commands and increases system efficiency. Here is an example of Sync voice commands that demonstrates how a broad menu structure can increase task efficiency over the hierarchical menu structure.

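The command sequences below are illustrative reconstructions (the exact Sync menu wording may differ); what matters is the command count:

    # Hypothetical command sequences for playing a song; the menu wording
    # is illustrative, not taken verbatim from the product
    hierarchical = ["Audio", "USB", "Play artist Bon Jovi"]  # one command per menu level
    broad = ["Play artist Bon Jovi"]                         # bypasses the menu levels

    response_delay = 4  # seconds Sync takes to confirm and prompt after each command
    saved_commands = len(hierarchical) - len(broad)          # two fewer commands
    print(f"time saved: up to {saved_commands * response_delay} seconds")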

Compared to Sync’s current hierarchical menu structure, the broad menu structure results in two fewer voice commands. Considering the three- or four-second system response time after each voice command, the broad menu structure would result in a time saving of up to eight seconds. Incorporating a broader menu structure would reduce Sync’s system response time, overall task time, and potential impact on driver safety.

[/greybox]

Solving Interaction and Design for All: Tackling UX Challenges with Accessibility Insights

Jim Thatcher, co-author of Web Accessibility: Web Standards and Regulatory Compliance and a noted accessibility expert, has a great way to think about accessibility. He says, “I’ve gotta be able to (1) get there, (2) know where I am, and (3) know what I can do.”

This sounds a lot like good usability. Maybe focusing on accessibility objectives can help surface UX issues and offer ways to address them. We can go beyond the “accidental benefits” of accessibility and improve a product or site’s overall usability through accessibility insights. In the process, we might make our sites more usable for everyone.

Here are some examples:

Reduce Cognitive Burden and Boost Learnability

As user experience designers and developers, we still make users think too much. It’s not usually intentional, but it happens.

For instance, a woman who uses a screen magnifier is looking at your financial services website. She magnifies her screen twenty-six times (26×), so instead of the usual 960-pixel-wide viewport, hers is just 37 pixels wide. With such a small space visible, she needs context embedded in each screen element to find a relevant link. Ambiguous links such as “More,” “Click Here,” or “PDF” leave her wondering, “What is this? Is this the one I need?”

A person who uses a screen reader isn’t much better off—he can get a list of links on the page, but will only hear “Link: more, Link: more, Link: more.”

Imagine browsing a site with clear links like “Financial Advisory Services” and “More News” and “More Market Analysis” versus one that uses the vague “More” for every link. With clear labels, you don’t need to look at the surrounding context to see where they go. Vague links make users think harder than they should.
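One way to make this concrete is to treat ambiguous labels as a checkable rule. The sketch below is a simple, hypothetical check (not a substitute for a real accessibility review) that flags link labels a user could not interpret out of context:

    # Link labels that convey nothing when read out of context,
    # as in a screen reader's links list; the set is illustrative
    AMBIGUOUS_LABELS = {"more", "click here", "here", "read more", "pdf", "link"}

    def flag_ambiguous_links(labels: list[str]) -> list[str]:
        """Return the labels a user could not interpret without surrounding context."""
        return [label for label in labels if label.strip().lower() in AMBIGUOUS_LABELS]

    links = ["Financial Advisory Services", "More", "More Market Analysis", "PDF"]
    print(flag_ambiguous_links(links))  # -> ['More', 'PDF']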

When users visit a website, they should be able to easily answer these guiding questions about each element they see:

  • What is this thing?
  • How do I use it?
  • Do I have the information I need to decide if it is what I want?

Build context and information into each element on the page, so that it is easy for users to answer these questions, whether they are doing a quick scan, just looking at a small area of the screen, or listening to a screen reader.

Simplify and Reorder to Improve Efficiency and Effectiveness

Does the design look bloated, filled with too many controls, widgets, and text? In the spirit of accessibility, ditch your mouse and use the keyboard. Many forms of assistive technology use the keyboard to navigate. Extraneous labels, widgets, icons, and controls are not just visual clutter: they are obstacles for people navigating sequentially using a keyboard.

When we designed an online budget application, we needed a place to display information for each budget line. With six to eight fields per line, it would take a keyboard user quite a while to tab through them.

When we looked at it that way, we noticed a bigger visual problem: with a screen full of details about the budget all shown at once, nothing stood out in the “display everything” approach. For users viewing the page without assistive technologies, there was no weighting or visual importance, so scanning for key information among all the “noise” was hard.

We changed the design. Once the user entered a budget line, the information was added to a summary table with details hidden until they expanded that line. It’s more efficient and effective for all users: they get the key information needed to select a record and can spot things quickly. Efficiency is even more important for people who use assistive technology.

The solution: move the elements, reorganize them, remove them, hide them—do something. If the user doesn’t know why it’s there or it’s not important, it’s just more noise and makes their job harder.

When the goal of a product is to improve efficiency and effectiveness, users should be able to answer these questions:

  • What’s important for me to see or know on this screen?
  • Do I have the information I need to pick the right item?
  • Is this important for my task?
  • Has anything changed since the last time I was here?

Give these issues and questions a thought on your next project and you’ll find better ways to design and build it.

Multiple Skills: The Ultimate UX Career Path Expander

It may be true that a jack-of-all-trades is a master of none, but for a UX professional, extra skills can boost a career.

When I was a hiring manager at Apple, Amazon.com, and Yahoo!, I found the most compelling candidates to be those with a mix of skills. They tended to contribute out-of-the-box ideas, communicate well across functional groups, and fill more than one role during implementation. When I followed some of them through their subsequent careers, I noticed that they tended to take unconventional but very successful paths.

Common Skill Combinations

Throughout my career, I’ve met UX professionals who have mastered multiple skills. The diagram below (Figure 1) reflects my cumulative impression of the skill combinations that a UX professional is most likely to have. Skills in the same bubble (such as user research and accessibility) tend to go together most often; those connected by a direct line go together somewhat less often.

In the center of the diagram is product leadership. Some companies use that term to describe the skill set that qualifies a person to cross-functionally manage the design and development of a product. Experienced product managers usually fill such roles. But when an available manager in engineering or design has more product development experience than anyone else on the team, a pragmatic executive will often choose that manager to lead the project.

Figure 1. Product leadership combines many skills: interaction design, information architecture, visual design, animation, prototyping, software engineering, user research, accessibility, and analytics.

Successful People with Multiple Skills

The case studies in this article focus on the role that multiple skills played in the achievements of six of my former Yahoo! colleagues. My motivation for sharing their stories is to inspire others to expand their career options by acquiring and applying multiple skills.

Matte Scheinker


When I first met Matte, he led the Messenger User Experience Design (UED) team at Yahoo!. He is now chief product officer at YouSendIt, a private company in the digital file delivery space. In a recent interview, Matte talked about his evolution from psychology major to product management executive:

“My first job in the industry came when the small membership association I was working for needed someone to maintain their website. I leveraged my writing skills to get the assignment. I then used my dormant coding skills to get up to speed on HTML, JavaScript, and CSS. While building the website, I discovered a love for the user experience part of the process. I merged my academic training in psychology with coding to convince a startup to hire me as an interaction designer.

It was at the startup where I learned the basics of interaction design from some amazingly talented and generous people. When that startup needed to do user research, I was able to bring my skills from working in a research lab in college and helped run hundreds of tests.

When the dot-com bubble burst, I got my dream job at Yahoo! where many more brilliant and kind people gave me a master class in interaction design and taught me the basics of visual design. I jumped at the opportunity to take a leadership role and quickly realized that my business skills weren’t good enough to effectively communicate with my business partners. Again, I relied on the generosity and patience of my colleagues who walked me through spreadsheet after spreadsheet until I began to understand.

Gaining all of those skills was just what I needed to get my next job: helping to instill design thinking across diverse functions at AOL. I led a team that set hiring standards for product management and design, set product standards, conducted product reviews, and created user-focused educational opportunities. Members of my team acted as PM/designer/developer on a number of high-profile projects.

My current role as the chief product officer of YouSendIt requires the totality of the skills I’ve gained throughout the years. In interacting with colleagues across the organization, I play the role of Babelfish, giving me the opportunity to insert design and user-focused thinking into every conversation. Ten years ago, people with my background weren’t considered for roles like this. Now I’m one of many people flexing user experience muscles to shape the way their companies envision and build products.”

Irene Au

After two years as an interaction designer at Netscape, Irene Au joined Yahoo! in 1998. As manager of user interface design and user research, she introduced usability testing, field research, and proactive design methods to the company. She moved up to director before becoming Yahoo!’s first vice president of UED. Later, she led a UED-like group at Google.

Irene is now in charge of product and UX at Udacity, a startup in the higher education space. She also teaches yoga. She told me how several of her skills have boosted her career:

“My technical background (B.S. in Electrical and Computer Engineering) gave me credibility with my engineering colleagues and enabled me to come up with practical design solutions that could be built.

My training in interaction design (Master’s degree in Human-Computer Interaction) taught me problem solving skills for design.

My experience in conducting user research helped me better understand the users I was designing for and be a more effective designer for their needs.

Soft skills such as collaboration, negotiation, persuasion, and facilitation helped me get desired outcomes in a positive, healthy way that makes everyone feel involved and good about decisions. Being a good designer requires you to be versatile enough to understand what the overall strategic goals are, the technical underpinnings of the product, and what users need. Being able to have all perspectives requires patience, practice, and empathy acquired through listening and curiosity.”

Tom Chi

When Yahoo! hired Tom Chi in 2005, he was best known for OK/Cancel, the satirical UX comic strip that he and Kevin Cheng created. But Yahoo! hired him for the breadth of his skills. As of this writing, Tom’s LinkedIn page lists fifteen fairly distinct skills.

Tom walked me through his career journey. After gaining an academic foundation in astrophysics, he earned a Master’s degree in the more practical field of electrical engineering. He needed software to test his hardware, so he learned to program. That led to a software-consulting role at Trilogy. There he picked up project planning skills that landed him a program management position at Microsoft.

In his five years at Yahoo!, Tom grew from a multi-talented designer to a senior director of product and user experience. At Google, he built an advanced development team. Recently, he founded a non-profit to advise social entrepreneurs.

Tom shared how he applies his non-UX skills to UX-related work:

“Astrophysics gets you to quickly assess unknowns and find a path to make them known. Because you work over so many orders of magnitude, it builds a sense of perspective. In business, perspective of relative scale helps you decide which issues are worth arguing about.

Electrical engineering helped me learn interaction design. Electrical engineering is largely concerned with signals and their amplitude, frequency, and noise. I transferred my understanding of signals to my work in design. The frequency of use of a feature meant it required quicker access. The size of an item was a kind of amplitude. Anything that reduced the clarity of the user’s intention was like noise. I was able to map a good deal of signal processing to interaction design and quickly get up to speed.

In product management, you learn how parts interact, how ideas lead to outcomes, and how to specify, schedule, and resource a project. Product management taught me to step back and look at a project or design strategically, plan what conversations to have, and effectively communicate what needs to get done.”

Luke Wroblewski

Luke Wroblewski is a well-known UX author and speaker. The list of specialties on his LinkedIn page includes product strategy, information architecture, and usability, as well as visual, information, and interaction design for web and mobile products. He founded a social product list sharing company and sold it to Twitter when it was barely in beta.

Luke told me about four high-level skills that he applies in every role:

  1. Pattern recognition. People make sense of what they see by recognizing the similarities and differences between visual elements. Through the process of visual organization, designers manipulate these visual relationships to create meaning. This requires an intimate awareness of visual patterns and the ability to manipulate those patterns to tell a structured story.
  2. Storytelling. The design of products requires effective communication with an audience. Each product’s interface needs to ‘tell’ people what it offers them and why they should care. This requires the ability to explain and persuade not only with logic, but with emotion. In other words, it requires storytelling.
  3. Visual communication. In order to effectively communicate meaningful stories, designers need to manage the prioritization and relationship of visual elements. Exposing these relationships through visual communication enables people to easily interpret complex information and its implications.
  4. Empathy. Design methodology focuses on the perspective of customers and end users when analyzing and crafting solutions.

Christian Crumlish

Yahoo! hired Christian Crumlish in 2007 to curate its internal and public pattern libraries. His experience as a book author who had written about web design and information architecture made him a prime candidate for the role. Later, at AOL, Christian acquired product leadership skills. He is currently director of product management at CloudOn, a startup that enables tablet users to access standard productivity applications.

Christian’s UX-related skills include information architecture, interaction design, product leadership, and “voice” (copy, language, nomenclature, etc.). Like Luke, he cited storytelling and visual communication as key high-level skills.

Here is a list of some of his skills and ways in which he acquired them:

  • Storytelling and verbal communication: Practice in grade school and college.
  • Sketching: Years of experimenting as a child.
  • Detecting miscommunication: Working as a messenger at a printing company.
  • Information architecture: Independent exploration.
  • Interaction design: On-the-job experience as an extension from IA.
  • Product leadership: Projects undertaken while on the consumer experience team at AOL.
  • Prototyping: Hands-on experience throughout his career.

Frank Yoo

Frank Yoo began his career as an art director at the University of Michigan, eventually transitioning into UX-oriented roles at diverse companies. During his five years working at Yahoo!, he served as UX design lead on several high-traffic sites. Next, he spent some time working in a mobile design lead role at LinkedIn, after which he became product lead for the ride-sharing startup Zimride/Lyft.

In addition to presentation and written communication skills, Frank highlighted the benefits of mastering design tools:

“Having greater control of your creative facility not only saves time when tight deadlines loom, it enables free expression of your ideas across multiple skill disciplines, as well as a pure application of your problem solving skills as a designer. It’s like the relationship between musicians and their instruments.”

Frank explained the path that he took from design lead to product lead:

“When armed with good data and clear product goals, I found it easier to create more focused and powerful user experiences. When I arrived at Zimride, I asked to take on the product manager role (while maintaining design lead duties) for the mobile team, and I was granted that opportunity. I think my design background gives me valuable perspective on product development because I’ve been conditioned to be an advocate of the user. As a product manager, I still have the opportunity to exercise my creativity and build great experiences, just in a more holistic way. In my current role, this boils down to managing priorities, keeping product goals aligned with company goals, and shepherding overall product execution.”

What Does This Mean for Your Career?

Did reading these stories get you thinking about how additional skills might boost your own career? If so, consider approaching career planning as a design exercise. Where do you want to be in five years? What skills could help you get there? Which do you already have? Which do you need to acquire?

Sketch out several scenarios on your own or with a friend or mentor. Weigh the pros and cons of each scenario. Pick one that suits you. After choosing, remain open-minded because unanticipated opportunities may arise, as they did for our storytellers.

Here are some ways a person with a full-time job can acquire additional skills:

  1. Take courses offered by your employer or a professional association.
  2. Learn an analytical subject like computer science or statistics tuition-free through an online university. You can learn at your own pace at Khan Academy or Udacity, or enroll in a scheduled course at EdX or Coursera.
  3. Take graphic arts classes offered in your community.
  4. Maintain a website for a non-profit or manage a redesign of their site.
  5. Back at the office, work closely with someone who has a skill you want to learn and who wants to learn a skill from you.

No matter how you decide to become a multi-skilled UX professional, doing so could enrich your career in both expected and unexpected ways. And consider this bonus benefit: when you bring your UX sensibilities to other roles, you may be in a better position to make products more user-centric. Isn’t that why you entered the UX profession in the first place?

 

What You Should Know About the Semantic Web

Take any typical web page. What is it about? You interpret the meaning of its words and illustrations, and use that information to decide your next action.

When your browser reads that web page, it is only capable of a simpler “mental” process:

  • It has an address from which to collect data
  • It uses HTML to display the contents
  • It assembles the data in some representation based on display formatting
  • It has links to other addresses—other pages of data—it can go to

The computer sees the presentation but cannot recognize meaning. Likewise, you cannot tell your browser your situation, goals, or anything else that might help it behave more intelligently. It cannot be an agent on your behalf, locate novel information, or filter things out. It cannot make the experience easier.

We resolve these problems by coding “intelligence” into applications via user preferences, complex search algorithms, error trapping, task flows, and navigational signposting. These work as long as we have the resources to carefully create stand-alone applications.

What is the “Semantic Web”?

The Semantic Web vision is about embedding additional layers of meaning in web data, codifying those data in a standardized way to enable sharing, and then facilitating the creation of software that uses those data for more than just presenting an attractive display. The goal of the Semantic Web, described by Tim Berners-Lee and his colleagues in Scientific American (May 2001), is “better enabling computers and people to work in cooperation.” This definition demands the involvement of HCI and usability.

The World Wide Web Consortium (W3C) is defining the “building blocks” for the Semantic Web, focusing on data formats, ways to describe relationships, and processes that make it possible to share more meaningful data across the Web. Significantly, in September 2006, “User Interaction” became a major architectural element, reinforcing the importance users will play in the Semantic Web’s growth.

What are the Usability Opportunities and Issues?

Let’s consider how we can enhance some common web activities:

Browsing for Information

If you have shopped online in the past two years, you have probably used “facets” for filtering—each selection from a set of categories narrows your list of items progressively, and they can be selected/de-selected in any order.

You can also browse interwoven sets of information using facets, creating a dynamic, user-driven navigation of subjects. This type of browsing requires clearly described relationships between data. If I browse a museum collection for French sculptors, I would see Rodin and his statue The Thinker. Faceted browsing means that if I start instead with figures of men, cast in bronze, by famous artists, I would still arrive at The Thinker, via a path that aligns with my mental model, experience, and task. Once there, I can navigate in another direction to get additional information about Rodin, his nationality, etc. Flamenco, mSpace, and Exhibit are among the projects working on this form of interaction.
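A minimal sketch of the mechanics, using invented museum data: each facet selection is just another filter, and because the filters can be applied in any order, different browsing paths converge on the same result set.

    # Invented museum records; real systems would draw these from shared,
    # semantically described data
    works = [
        {"title": "The Thinker", "artist": "Rodin", "nationality": "French",
         "material": "bronze", "subject": "male figure"},
        {"title": "Bird in Space", "artist": "Brancusi", "nationality": "Romanian",
         "material": "bronze", "subject": "bird"},
    ]

    def apply_facets(items, **selected):
        """Keep only items matching every selected facet value."""
        return [item for item in items
                if all(item.get(facet) == value for facet, value in selected.items())]

    # Two different browsing paths, same destination
    path_a = apply_facets(apply_facets(works, nationality="French"), material="bronze")
    path_b = apply_facets(apply_facets(works, material="bronze"), subject="male figure")
    print([w["title"] for w in path_a])  # -> ['The Thinker']
    print([w["title"] for w in path_b])  # -> ['The Thinker']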

This can be done in other ways, but the Semantic Web’s focus on relationships between data and on common data formats facilitates this interaction, and it means that anyone can share their data into the collection with little or no extra work. If someone discovers biographical information about the person who posed for The Thinker, it can be linked to the collection without reworking the data model. If there are descriptions of sculptors in other languages, they can be integrated into the same application—a property that is great for localization. Semantic Web structures also mean the data are available for more elaborate uses in the future as technologies develop.

However, if faceted browsing becomes commonplace, what interaction norms will develop? How does the user turn facet selections on and off, or rearrange navigation sequence? How many facets and terms should be presented, and in what layout? What labeling constraints should we apply? How much can be presented without confusion?

Performing Complex Searches

While search tools are more widely used, more flexible, and easier to use than ever, they also strain under the weight of the web content created every day and of more sophisticated users demanding more relevant results. Search works well when the goal is either a specific, known item or an overview of a subject where the specific information is less important than finding something.

But try some of these searches in your favorite search engine:

  • What are the capitals of the countries bordering the Pacific Rim? Which have had changes in government in the last three years?
  • What Greek restaurants are open after 10:00 P.M. within two blocks of the Scandinavian modern furniture store on Georgia Avenue in Washington, DC?
  • What are the problems with that new migraine treatment according to official sources?

One focus of Semantic Web researchers is integrating logical models of the subject domain with the search process. The goal is to interpret the meaning and relationships in the sets of terms that people use in order to enhance the relevance of search results, as well as to synthesize concepts and information from sources on different subjects to solve complex problems. To tackle the questions above, search engines must traverse the underlying relationships between elements in the question; matching the collection of words alone is insufficient.
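As a toy illustration of what traversing relationships means in practice, the sketch below stores facts as subject-predicate-object triples and answers a compound question by chaining lookups rather than matching keywords. The data and relation names are invented.

    # Facts stored as (subject, predicate, object) triples; all invented examples
    triples = {
        ("Japan", "borders", "the Pacific"),
        ("Chile", "borders", "the Pacific"),
        ("France", "borders", "the Atlantic"),
        ("Japan", "hasCapital", "Tokyo"),
        ("Chile", "hasCapital", "Santiago"),
        ("France", "hasCapital", "Paris"),
    }

    def subjects_of(predicate, obj):
        """All subjects related to obj by predicate."""
        return {s for (s, p, o) in triples if p == predicate and o == obj}

    def objects_of(subject, predicate):
        """All objects related to subject by predicate."""
        return {o for (s, p, o) in triples if s == subject and p == predicate}

    # "What are the capitals of the countries bordering the Pacific?"
    # answered by chaining two relationships, not by keyword matching
    pacific_countries = subjects_of("borders", "the Pacific")
    capitals = {c for country in pacific_countries
                for c in objects_of(country, "hasCapital")}
    print(capitals)  # -> {'Tokyo', 'Santiago'} (set order may vary)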

What are the usability challenges? How might natural language interaction feel? What will user expectations be while users are unfamiliar with such techniques, and then as they become familiar? How does an application interpret the user’s language and intentions? How can applications expose assumptions or interpretations to the user? Does greater transparency mean greater trust? Can scenario-based approaches help model situations and goals? If the “answer” is not complete or clear, what is the nature of the “refinement conversation?”

Figure 1. Tim Berners-Lee’s “Layer Cake”

Using Vocabularies and Descriptions

Recently, I provided navigation and search methods for a collection of medical documents used by non-experts. On the web I found extensive, standardized medical vocabularies that could improve the information’s descriptions and searchability. I avoided inventing more than 20,000 terms (plus synonyms, misspellings, etc.) and their relationships. I did not have to arrange for code to be written to digest this terminology. Client staff will not have to maintain it over the years. Even better, when somebody wants to integrate what they find in this resource with international medical journals, they need not start over and spend time coding and entering data. The data and relationships are consistent, and both commercial and open source tools will be available.

I still must edit and arrange more than 20,000 terms (and 3,000 non-medical terms) in a coherent, efficient way. How do I look at that many terms to see which ones I must refine, or to understand the cross-relations? I need new forms of interaction to navigate, edit, connect, and review large amounts of terminology. Fortunately, there are projects both inside and outside the Semantic Web community that look at these visualization and navigation challenges.

The more consistently we represent our data and develop tools that interact with the data, the better off we will be. We all need more usable tools, and more imaginative representations to navigate large sets of vocabulary and logical relationships.

Using and Interpreting Context

There are several aspects of the Web and computer software that seem “dumb.” Why do they ignore my current goals? Why do they forget my prior experiences and expertise when filtering what I do or look at? Why will they not recognize what I already did so I can skip redundant steps? Alternatively, if I alter my usual routine, will they stop suggesting that I follow inappropriate paths based on prior actions?

We need a “language of context” that creates relationships between experience, tasks, preferences, situations, and desired outcomes. Even simple things like filtering searches by location, situation, and experience level could make the Web much more relevant.
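Even without a full standard, the idea can be sketched: represent the user’s context as structured data and let it parameterize filtering. Everything below (the field names and the example records) is invented for illustration.

    # A user's context as structured data; the fields are invented examples
    context = {"location": "Washington, DC", "experience_level": "expert"}

    results = [
        {"title": "Intro to migraines", "audience": "novice", "location": None},
        {"title": "Migraine treatment trial data", "audience": "expert", "location": None},
        {"title": "DC-area headache clinics", "audience": "novice", "location": "Washington, DC"},
    ]

    def relevant(item, ctx):
        """Keep items that match the user's expertise or locale."""
        matches_level = item["audience"] == ctx["experience_level"]
        matches_place = item["location"] == ctx["location"]
        return matches_level or matches_place

    print([r["title"] for r in results if relevant(r, context)])
    # -> ['Migraine treatment trial data', 'DC-area headache clinics']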

What formats should describe context so machines can understand these relationships? How do we encode what we know from one situation for use in another situation? These questions can be explored by Semantic Web researchers, but they must first be asked by people focused on the user experience.

Instructing Agents and Automated Tools    

Do you change your preference settings in your favorite portals or news feed sites? I rarely do, although my interests and information needs change. Is it laziness, lack of engagement in my information choices, or usability problems? Could it be weariness with software that regularly reminds me I must pay attention and make several choices among several items I only partly understand?

While we might prefer using autonomous and semi-autonomous agents to reduce the time we spend on tedious tasks, we must consider how to maintain transparency of actions and a reasonable balance between knowing what agents are doing and being pestered by them.

As you work with various computer applications and websites, what are they doing, and how much do you trust what is going on behind the scenes? Can you interpret the logic and rules that agents apply and the outcomes they deliver? Is your privacy respected by the organizations to whom you give information, and by those to whom they pass it on?

The more that data are integrated, the more that different, separate pieces come together to create interesting patterns. How can we know when this occurs? Might it be possible in the future to create guidelines and rules that are integrated with our data, so when those data are used the rules are known? Is it possible for information about us to be available on the Web not as isolated records, but via agents whom we instruct with our expectations on use?

These challenging questions hit at the heart of user expectations, trust, and confidence. They are the kinds of questions that projects like the Policy Aware Web look at. They are not isolated to the Semantic Web, but arise now as we use the Web. This is an important area for usability professionals to act as advocates for the “voice of the user!”

What if the Semantic Web Never Arrives?

The Web was architecturally simple but took years to develop. The Semantic Web is more complex and will take many more years to develop. There is always a possibility that it will never “arrive” in the form currently described.

Does that mean we should hold back?

No. The Semantic Web aims to solve problems that will remain no matter what technologies are finally deployed or what label is put on them. We should work on analysis, design, and evaluation concepts for:

  • Visualization and navigation of large-scale web data
  • Refined search and browsing experiences, with knowledge of context and goals
  • Increasing personal interaction—the Web coming to me, in my current context, proactively supporting my goals and facilitating my relationships
  • Agents that automate routine tasks with permission to talk to other agents on my behalf
  • Greater collaboration and sharing with others

Closing Thoughts

Early sites show promise using Semantic Web data formats and technologies. The Semantic Web aligns well with the range of new web techniques (such as “Web 2.0”) that seek to transform our online experience.

It will take time to address the challenges identified above. The Semantic Web community—both researchers and professional developers—seeks an understanding of usability and design. We want them to make decisions and create interaction approaches that are usable. The usability community must be part of the Semantic Web conversation at the earliest possible stage.


Studying UX Online: Advantages, Difficulties, and Best Practices

The Internet is revolutionizing education, and UX education has begun moving online. Students from all over the world can study UX via online courses that provide them with valuable skills needed to work in today’s world of distributed teams. Online education offers a number of advantages to students, instructors, universities, and employers, often allowing students to further their educations while continuing to work full time (Figure 1).

Figure 1. Students in today’s programs can attend classes remotely. Credit: The Academic Technology Center at Bentley University.

However, the field of online education is still relatively new, and there is a widespread perception of online programs as diploma mills. In light of these circumstances, we wanted to examine the current state of online UX education, learn about the advantages and disadvantages of studying UX online, and collect best practices from UX students and instructors.

Defining Online UX Education

What exactly is meant by online UX education? When speaking with people for this article, we encountered a range of online experiences, from formal master’s programs, to single UX classes within programs in a related field, to continuing education resources such as All You Can Learn and Lynda.com. We focused primarily on the more formal programs, but even here, there was some variation. For example, Bentley University’s Master of Science in Human Factors in Information Design (HFID) is synchronous, and students are expected to attend lectures and participate in class discussions via an audiovisual system facilitated by teaching assistants in each classroom (Figure 2).

A photo showing the computer monitor screen also shown on the large display screen.
Figure 2. A student assistant watches a monitor to be ready to assist remote students. Credit: The Academic Technology Center at Bentley University.

At the other end of the spectrum, Kent State University’s Master of Science in User Experience Design is asynchronous, while DePaul University’s Master of Science in Human-Computer Interaction allows students to either attend lectures synchronously or listen to recordings of the lectures later on. We spoke with students from all three of these programs, as well as four instructors—two from UX master’s programs and two who had taught a UX course as part of a program in a related field. Some students were located in different countries and time zones from those in which the classes were conducted.

Advantages of Studying UX Online

The primary advantage of studying UX online, rather than in a traditional classroom, is the unparalleled flexibility and convenience offered by online programs. While all of the formal programs we looked at are structured around a weekly timeline, students are able to fit their assignments in around the other “large rocks” in their lives, including their jobs, spouses, children, and other responsibilities. Alesha, a recent graduate of Kent State, notes, “I was still my kid’s mom. I was still my husband’s wife. I was still my boss’s employee. I didn’t have to put my life on hold.” Two other students said that although they technically live within commuting distance of Kent State, having to commute to campus one or more times a week, on top of completing the coursework and their other responsibilities, would have made the program infeasible for them.

Beyond this scheduling flexibility, online programs also offer geographical freedom, opening up a range of possibilities for overseas students and for domestic students who don’t live near a school with an on-campus UX program. “It lets me attend a reputable program without having to move across the country,” says Sara, a new student in the DePaul program. “Having an 11-year-old daughter in middle school, that would be a big upheaval.”

Jennifer, a Bentley alumna, is even more appreciative of the geographical flexibility: “For me specifically it was great because my husband is in the military and I knew I was probably going to end up relocating partway through the program. It gave me the flexibility to take the program no matter where I ended up moving.”

Difficulties and Challenges of Studying UX Online

Along with its advantages, online UX education brings a number of challenges for students. We primarily focused on social challenges, challenges with collaboration, cultural and time zone-related issues, and challenges with software and hardware tools used for class.

Social challenges

One of the primary characteristics of online education is a lack of in-person interaction with instructors and classmates. As Dr. Bill Gribbons, director of the program at Bentley University, asks, “How do you preserve that sense of sharing and support that occurs in the real world? How do you preserve that in the virtual world?” Similarly, Dr. Fast, who helped build the program at Kent State, notes, “You need to give people the opportunity to bond. You can sometimes learn more from watching someone walk across the room than you can from sending them 12 emails.”

Many of the students we spoke with felt this lack keenly, particularly with respect to instructors. Kaycee says, “Communication with professors is never done in person, so you lose a little bit of that interaction with professors. For my first master’s, I felt close to them; we still email and talk; but I don’t know any of my professors from my UX program. I have no connection to them. I wouldn’t ever email them and see how they’re doing, whereas in my first master’s, I felt very connected to them, we worked together on a daily basis, so I do feel like I missed that connection.”

However, lack of face-to-face interaction did not always result in students feeling disconnected. Students in Dr. Janice Redish’s class felt so close that one West Coast student arranged to meet a fellow classmate while on a business trip to the East Coast. Dr. Redish herself had lunch with three of her students, as well as one of her co-instructors, while on a trip to Vancouver. She says, “I think that happened because there was so much peer review interaction as well as instructor interaction during the course, even though it was entirely asynchronous.” Other students feel that they have gained mentors; one even said that she had made lifelong friends from her UX program.

There was also a range of reactions to the feeling of lack of connectedness. One Kent State student felt that it was partially her own doing, stating, “I’m somewhat of an introvert, so if I made more of an effort to reach out, who knows? But maybe we’re just a class of introverts.” Jennifer from Bentley adds, “When I talk to a lot of my coworkers who have done graduate programs online, my program seemed a lot less painful with just as much knowledge, and had professors which—while we didn’t actually connect—I actually liked.”

Cultural and time zone-related challenges

Online UX classes have diversity in age, gender, location (both within and outside the U.S.), and levels of technical sophistication and professional experience. The programs we looked at include students ranging in age from early 20s to early 50s, and approximately equal numbers of men and women.

Students overwhelmingly viewed geographical diversity as a positive aspect of online courses. For example, Alesha from Kent State was pleased that the geographic makeup of her classes allowed her to collaborate with people from different subcultures within the U.S., and also provided her with unexpected insight into how people from different regions interacted with a particular technology one of her class groups was researching.

However, differences in technical sophistication levels were seen as more of a challenge, given online students’ reliance on computer-based communication systems. This variance can be frustrating for those with more, or less, technical experience. One alumna with greater technical sophistication stated, “With that technical divide, we had to show a few people how to sign up for a Google account. That was scary—having to walk someone through how to set up a Gmail account and use some of those tools—because to me they’re very intuitive.” At the other end of the spectrum, lack of confidence in one’s own technical skills can present a psychological hurdle for some students, particularly in hybrid classes. As one put it, “I guess the most difficult part was the technology; as well as being an enabler, it was also a barrier. My anxiety level for the first couple of classes was just tremendous, thinking, ‘Am I going to be able to use the audiovisual software? Am I going to screw up? And do it publicly?’”

In addition to this disparity in technical sophistication, there was also some variation in professional experience. Levels ranged from very low—students entering graduate programs straight from undergrad or from completely different fields—to those who had worked in the field before starting graduate school. A couple of students with no prior experience in UX felt that being in class with more experienced people was intimidating at first.

Surprisingly, time zones presented few issues for the students we spoke with from the U.S. Those who did cite time-related issues referred to difficulties in arranging group meetings because of differing schedules among group members. For example, parents with full-time jobs were only available later in the evenings and on weekends, while students with part-time jobs preferred to meet during the day and keep their weekends free.

Collaborative tool challenges

As described above, differing levels of technological expertise within an online classroom can pose challenges for students. The practice of UX is inherently collaborative, and effective use of online tools is essential to any seamless collaboration between remotely located individuals. In addition, the tools themselves can cause issues with collaboration.

Blackboard, for example, is a frequently used class tool with which many students are satisfied. “I was never unable to accomplish what I needed to do,” says Laura P. from Kent State. “It wasn’t missing features that I needed; the features it has could be executed a little better, but I never found myself having to work around a missing feature or wishing it had features that it didn’t have.” Another student who had used a similar online class tool appreciated Blackboard for its lack of spam and its ability to manage all of her classes in one place.

However, a number of students and instructors expressed extreme dissatisfaction with the tool. One instructor says, “I hate Blackboard! I find it to be one of the worst pieces of software I’ve ever used. I get lost, and I think, ‘I design this stuff and I can’t figure it out’.”

While Blackboard is a common tool for collaboration, it is not the only one. Some programs use in-house tools, and students are frequently left to their own devices to choose collaboration solutions that work for them. Since these tools, such as Skype or Google Hangouts, are not necessarily sanctioned or supported by the universities, technology frustrations can lead to further feelings of disconnection between fellow students or between students and instructors. This is particularly true for online students who rely on these tools to communicate “live”; when the tools don’t function properly and are not supported, students feel a void in this very important means of communication.

Best Practices

The students and instructors we spoke with had a number of suggestions for best practices, and we were able to glean some from our own experiences as well.

For students

Online study is not for everyone; in speaking with students and instructors, we identified some clear characteristics that make for successful online students. Dr. Gribbons states, “I think online students know they have to be on top of their game. A successful online student is one who is extremely focused, time and project management oriented, takes advantage of every opportunity for support and feedback, and errs on the side of caution in asking for help; as a result, their performance is exceptional.”

In addition, successful online students tend to be proactive in making contact with instructors and take the initiative in participating in additional activities like professional conferences and networking. Dr. Gribbons says, “When I have office hours, even though only about a third of our students are online, more than half my office hour appointments are online students—and not because they’re struggling more—it’s because they want to be engaged.” A graduate of one of the programs states, “At the beginning of each class I reached out and connected one-on-one with the professor so that then when I had a question later on in the course, he or she knew that I wasn’t some last-minute floundering newbie going, ‘Help!’ So I made a personal and professional connection with every professor at the beginning of the class, and that helped; when I needed answers, I got them quickly.”

For programs

A number of best practices for programs emerged from our discussions with online students and instructors. Based on their input, we offer the following recommendations:

  • Limit program and class size. Bentley and Kent State both limit their programs to approximately 150 students and cap class size at 25–35 students. Institutions with asynchronous programs in particular may be tempted to increase revenue by admitting more students and increasing class size, but Dr. Gribbons feels that “what is lost is learning for the students, and meaningful engagement.”
  • Consider adopting a cohort model to some extent, to foster connection between students from one class to the next.
  • Provide at least a few days of in-person interaction, if possible. Failing that, encourage students and instructors to interact face-to-face via collaboration tools such as Google Hangouts. Dr. Gribbons attributes the engagement his online students feel, in part, to Boot Camp, a one-week residential program that all remote students must complete. One Kent State graduate wishes her program had included an in-person component: “Once the head of the department mentioned that he wished there would be an optional weekend where everyone could come to Kent; that would have been great.”
  • Provide synchronous interaction when possible. Synchronous activity can be invaluable in, as Dr. Gribbons says, “giving connection to a user community” and creating a sense of connection between classmates and instructor. It can also help instructors check whether students understand concepts. One instructor in an asynchronous program ended up scheduling weekly calls with her students because several of them were having to redo their class work.
  • Design a consistent weekly model for assignment due dates for asynchronous classes.
  • Push for better online collaboration tools. As one of the instructors noted, classroom tools for design, in particular, are not as good as corporate tools.
  • Train students in the use of classroom and collaboration tools. A Bentley student praised her program’s synchronous introduction to the audiovisual system, and an instructor at another institution said she thought her program should consider “a foundational program where the students learn to use Google Hangouts so you don’t have to spend class time teaching them.”
  • Train instructors in the use of online teaching techniques and the use of technology for teaching online. Kent State does this by flying new adjunct instructors to the university for a 2–3 day intensive workshop on course design and learning how to use the tools.
  • When direct interaction is not feasible, encourage instructors to create explanatory videos and provide feedback via video. Kent State has adopted such a model.
  • Foster education about, and empathy for, online students among instructors and on-site students. One Bentley graduate succinctly expressed a sentiment shared by other students and instructors: “Room for improvement would probably be everyone learning best practices for online students. Everyone—including online students, including the professor, and including the students in the room.”

This article only begins to scratch the surface of studying UX online. Today’s programs are producing effective UX professionals. However, as technology improves, and institutions, instructors, and students work to address the challenges involved in online learning, studying UX online will hopefully become an even smoother and more effective experience.

[bluebox]

Tools for Studying Online

A number of collaboration and UX tools were mentioned in the course of our conversations. These include:

General Collaboration Tools

Google+ Groups
Google Drive/Docs
Google Hangouts
Glassboard
JoinMe
GoToMeeting
WebEx
Skype

UX-specific Tools

Axure
Mural.ly
Boardthing
Screencast-o-matic
Loop11
Optimal Workshop

[/bluebox]

AUX3: Making UX Research Track with Agile

Agile and user experience (UX) have been partnered since the Agile Manifesto was first authored in 2001 and have a history of working well together in many situations. However, user experience research (UXR) has been largely unaccounted for in this work. The Agile Manifesto encourages us to revisit our processes and to iterate on ways to better describe how we work together with our teams. That is the goal of this article: to share our progress toward improving how we work by documenting an iteration of a model for you to try.

The model we are introducing—AUX3 (Agile UX with 3 Tracks)—explicitly defines and supports the time and effort needed for the full UX cycle. We provide evidence in this article that AUX3 embraces the complexity of UX while keeping up with the fast-moving train of Agile.

What We Hope to Improve Upon

Back in 2003, Gary Macomber and Thyra Rauch (“Adopting Agility” at USE 2003) described and sketched out the intertwining of UX and Development during an Agile process. Shortly thereafter, Lynn Miller mentioned “interconnected parallel design and development tracks” in her 2005 paper (presented at the Agile Development Conference). In 2007, Desirée Sy identified two tracks—Interaction Designer Track and Developer Track—in her seminal paper “Adapting Usability Investigations for Agile User-Centered Design” published in the UXPA Journal of Usability Studies.

A process diagram of dual track agile/UX work cycles.

Figure 1. Original diagram of the two-track effort created by Lynn Miller (copyright 2007).

Sy’s paper described UXR and interaction design work taking place in the Interaction Designer Track, which is exemplified by Lynn Miller’s diagram in Figure 1. The Sy and Miller model has been exceptionally helpful for utilizing UX within Agile environments, and it has been taught for 12 years. Over time, however, we have seen this model oversimplify the struggle for time and resources between learning about users and creative problem solving.

How We Are Expanding the Approach

In AUX3, we propose organizing UX work into three tracks to expose the three different types of work: Learning (research methods such as ethnography), Problem Solving (wireframes, interaction design, and so on), and Execution (visual design, design language development, and so on).

An illustration of the three tracks in AUX3, with their respective roles for larger and smaller teams.

Figure 2. AUX3 Tracks: Work conducted in the Learning Track informs the Problem Solving Track, and then work moves to the Execution Track. (Credit: Mckenzie Neenan, 2019)

By explicitly separating these activities, teams are able to better discuss what work is needed to solve the right problems for users in a UX and Agile supported framework. First, we define the three tracks and then describe how the work is done. Throughout, we refer to iterations (many teams call these sprints or cycles), and, in this article, we define these periods as being two weeks in duration (although these durations do vary by team).

Learning Track

In the Learning Track, UX research is conducted ahead of the Problem Solving work. This work, as Sy mentioned in her paper, is focused on generative learning. It informs the upcoming Problem Solving Track and includes strategic discovery through ethnography for newer projects, rapid UX research (such as interviews to answer specific questions), validation with prototypes, and/or usability testing on work that is in progress or has been completed. This track may also include activities such as task analysis and journey mapping, and the people working in this track will most likely need to work closely with people involved in the Problem Solving Track. The goal of the Learning Track is to conduct research to inform the team and evaluate progress. This work may be conducted two or more iterations ahead of the Problem Solving Track and informs all future work.

It is worth noting that with larger teams there may be dedicated people working on Learning aspects across multiple squads for more strategic efforts. Furthermore, these dedicated people sit outside this whole process as their work does not fit into a single or even multiple iterations. Their work is more overarching in scope and longer lasting in duration, as it looks ahead to inform business decisions more so than individual design and development efforts. This work could include research on open questions about large topics, a new pattern that needs a lot of time to create, or areas that span multiple products.

Problem Solving Track

Problem Solving is focused on creating pieces that communicate solution ideas to development, using wireframes, prototypes, and other creative activities to define solutions to a problem. This work sometimes overlaps heavily with UXR. Problem Solving efforts need to be conducted no more than one to two iterations ahead of the Execution Track to ensure that the designers do not get too far ahead. When Problem Solving is done too far ahead of execution, it can leave the design and technical teams out of sync, which ultimately leads to a less effective end product.

Execution Track

The UX team needs to partner with the technical team and help them to make immediate implementation decisions. This is done by communicating about the interaction designs (not just passing along final designs) and working to answer questions as they arise. The UX team in this track is focused on using existing design language and style guides to inform the work (in some cases developers can do that portion themselves) and enhancing those patterns as new needs are defined. This work is conducted in sync with the technical team, and ideally the visual and/or UX designer is able to support the team in person or virtually.

In some cases, at this stage, the team may realize that an experience cannot be supported as designed and additional problem solving is needed. In such situations, the UX design resource(s) may need to join the Execution Track for short-term support.

A grid demonstrating the different activities in each phase through several iterations of a project.

Figure 3. AUX3 Process of moving through iterations step by step with information acquired in the Learning Track informing Problem Solving, and then work moving to the Execution Track. (Credit: Mckenzie Neenan 2019)

Getting it Done, Iteration by Iteration

Since this work is conducted over the course of many iterations, it is important to clearly convey the cadence of the work. The following sections give further details.

Note: In this article, we are skipping the Cycle/Sprint Zero scenario as we find those to be, unfortunately, rare situations where greenfield work is being done.

Iteration 1

Learning Track

  • Gather information about users for Iterations 2–5 via user research.
  • Plan a usability study to be conducted on a prototype that the Problem Solving Track is creating.
  • Integrate UXR findings to inform the active work so that designs can continue to grow with that knowledge.

Problem Solving Track

  • Lay the groundwork for design efforts.
  • Work with the visual designer to update/create the initial design language and style guide, and do any necessary IA work.
  • Create a prototype to be tested via a usability study.

Execution Track

  • Work with the technical team to implement features with a low UI cost that need very little UX design effort.
  • Partner with a technical team to understand and work through the high cost/time work, such as the underlying architecture.

Iteration 2

Learning Track

  • Continue research and set up a usability study for the next iteration (recruitment, and so on).

Problem Solving Track

  • Work on interaction design efforts for Iteration 3.
  • Finalize prototype for usability testing.
  • Work on the next problems to solve based on research that the Learning team provided.

Execution Track

  • Implement designs and make the validated interaction design a reality.

Iteration 3

This iteration brings more maturity to the team as they have started to better understand their own cadence and have built relationships across the teams.

Learning Track

  • Run a usability study on the prototype the Problem Solving team created.
  • Conduct UXR for future iterations, such as a field study to get more detailed information.

Problem Solving Track

  • Finalize the design effort for Iteration 4.
  • Integrate the research findings across the work.
  • Support the usability study to immediately inform current design and work on future problems.

Execution Track

  • Continue to work directly with the development team to implement new features and address both technical and design debt as needed.

How Do You Really Fit UXR In?

Fitting this work into an iteration cycle may seem daunting, but there are ways of doing it in smaller pieces. For example, if you conduct interviews with consumers, these would typically take two iterations as described in the following sections.

Consumer Interviews

This example assumes the work will be conducted in two iterations (two weeks each).

Iteration 1: Plan and recruit (assumes some pre-work is in place).

Iteration 2: Conduct interviews, analyze, synthesize, and report.

Usability Evaluations

This example assumes that participants have already been scheduled to support continuous and iterative development.

Iteration 1, Week 1: Plan and design study, create interview guide, review design, and so on.

Iteration 1, Week 2: Conduct study with pre-scheduled participants, analyze, and report.

Deep Interaction Problems

This example assumes the work will be conducted in two iterations (two weeks each).

Iteration 1: Create initial wireframes, iterate, and create clickable prototype.

Iteration 2: Conduct study with pre-scheduled participants, iterate, and create a mature design.

Working between Tracks

Constant communication between these three tracks is essential for the team’s success: participating in team sharing sessions, sharing progress in tracking tools, having shared workspaces, sharing research results, and collaborating closely. Jeff Patton wrote in a 2017 blog post, “Dual Track Development is not Duel Track” (in reference to Sy’s paper “Adapting Usability Investigations for Agile User-Centered Design”), that “tracks are not competing rather they must work together.”

There cannot simply be a hand-off approach between the tracks. As with any good UX work, communication is required to make a great experience. That being said, a portion of the Learning Track work may result in discovering a lack of need, and a portion of that work may never result in prototypes. Learning efforts must be picked strategically and be constantly reviewed for priority alignment with teams so as not to waste efforts or work too far ahead. Strategic research priorities should more closely align with product management needs, and tactical research should align more with the design and technical teams’ needs. As the backlog is groomed and work is reprioritized, this affects the work that the Learning Track focuses on and vice versa.

Everyone works together, all at once, and everyone is aware of what the other disciplines are doing. This can be tricky, but constant communication about progress and priority will help the team get better after a few iterations. Work needs to be tracked in a way that is shared between team members and/or by an individual who keeps the team(s) aligned.

The UX team members are part of the scrum team, which helps keep everyone informed and on the same page. As part of this, during each iteration of the Execution Track, UX representatives need to be present (at minimum virtually) with the technical people. The UX representatives need to ensure that they are a ready resource and that the UX team is aware of what is being built. They should give feedback on the technical team’s work and receive feedback on the Problem Solving efforts being conducted. Additionally, UX work should be tracked in a way that is visible and familiar to the technical team.

UX work and prototypes that have already been created should be consistently validated with user feedback through usability testing or live metrics. The Problem Solving team is working just a bit ahead, while the Learning team is a few iterations ahead, and everyone is consulting one another. Leaving egos at the door is imperative to success.

Staffing

As might already be obvious, this approach requires a team of people with a broad set of UX skills. It is almost impossible for a solo UX practitioner to be successful with this approach, and even two UX professionals will find this work very challenging. Where resources are slim, we recommend swarming individuals onto the high-priority projects where success is most likely.

What Does This Mean?

The projects that will be most successful are ones where the leadership and stakeholders (product manager, team leader, and managers) understand the value of UX, and where the technical teams are open to collaborating across disciplines. It will take multiple iterations to get into a solid cadence with each other, and the teams need to be aware and supportive of this time constraint.

Ideally, teams will consist of three UX professionals: a strong UX researcher (UXR), a strong UX/interaction designer who can support UXR, and an experienced visual designer who will work with technical people and who can also support UX design work. Larger UX teams are beneficial, but need to be cautious about doing too much work ahead and potentially getting out of sync with the constraints the Execution team is working with.

Closing

The adoption of Agile has exploded over the past 20 years, and the dual track model has been an excellent way to expose and illustrate how these processes can work together. In the spirit of Agile and UX iterations and continual improvement, we believe that AUX3 furthers that model. AUX3 embraces the three different types of work needed to solve the right problems for users—Learning, Problem Solving, and Execution—and helps teams to organize around the work.

Artificial Intelligence and Chatbots—Creating Positive Experiences

In a broad sense, artificial intelligence (AI) uses computers and machines to simulate human decision-making and thinking. More modern definitions of AI describe it as the ability of a machine to generalize its knowledge and skills to new environments and to efficiently learn new skills or knowledge. Some current applications of AI include online shopping, facial recognition, speech recognition, and autonomous vehicles. This article will focus on conversational AI and the user interface considerations specifically for designing chatbots. A chatbot is an application of AI that simulates a conversation with a user using natural language processing through either text or voice communication. A digital or virtual assistant is a more complex form of a chatbot that can also complete tasks for the user.

AI and Popular Culture

The first thing that comes to mind when people hear the terms artificial intelligence or AI is often related to what they have seen in movies or read in novels, such as the loyal and helpful droids R2D2 and C3PO from Star Wars or the sinister cyborgs from The Terminator. Although AI is portrayed accurately in some popular culture, many movies and books distort our sense of what AI is and what it is capable of. These include incorrect assumptions that AI is so advanced that it can do anything a human can or that AI can act autonomously. Which portrayals of AI we adopt can lead us to experience either positive or negative feelings toward this technology and can set unrealistic expectations about what it can currently accomplish.

Historically, a machine was an AI if it could perform a task that previously required human intelligence. This definition, however, was not constrained by how human beings might solve complex problems (for example, we do not consider 100 million possible moves simultaneously) and did not factor in human learning. The early techniques of AI included hard-coded algorithms or fixed rule-based systems. In the article “What Is AI? Here’s Everything You Need to Know about Artificial Intelligence,” Nick Heath suggests that more modern definitions of AI describe it as the ability of a machine to generalize its knowledge and skills to new environments and efficiently gain new skills or knowledge.

Most modern applications, including chatbots and digital assistants (considered AIs by many), fit this narrow definition of the ability to generalize their training to a limited set of tasks, such as understanding speech and recommending products for purchase based on previous purchases. The concept of machine learning, a more recent development in AI, neatly fits this more modern definition of AI. With machine learning, algorithms are trained using large amounts of data without relying on explicit rule-based programming. The algorithms identify patterns to handle more complex problems, such as image recognition or predicting future stock prices. Heath also discusses artificial general intelligence (AGI), whereby a machine can learn and execute a wide variety of tasks and can reason about various topics similar to human ability. This form of intelligence is commonly portrayed in popular culture even though many experts do not believe it yet exists.

Role of the Conversation Designer

The meteoric rise in chatbot use over the last decade has created a new breed of UX practitioner: the conversation designer. Conversation design is equal parts content writing and interaction design. AI has not yet reached a level of maturity where it can spontaneously create chatbot dialogue. Instead, conversation designers conduct research to

  • understand the domain the chatbot is expected to cover,
  • determine the purpose of the chatbot, including why users will be interacting with it, and
  • create the actual chatbot dialogue so that it is accurate and consistent with the tone and voice of the chatbot.

Anatomy of a Chatbot

Conversation designers create three types of information for a chatbot:

  • intents
  • entities
  • dialogue

Intents represent the actions that users want to take; they capture user intentions or goals. The scope of information that all the intents together cover is known as the knowledge corpus. For each user intent identified, a list of example utterances is generated to represent common ways the user could state their intention. The chatbot will train on the utterances to learn what requests are considered equivalent. For example, if the user intent is “I want to place an order for pizza,” then utterances could include

  • I want to order pizza.
  • I’d like a pizza.
  • I’ll take a pizza.
  • I need a pizza.
  • I’m ordering a pizza.

Entities are the nouns in the user examples, or what the intents will act upon. In our example, the user wants to order a pizza, but there are different terms for pizza (e.g., zaw, pie, deep dish). One approach is to create more examples that use each of these terms so that the chatbot can learn each identified food that users can order. Alternatively, we can create an entity (@food) and add a value (pizza) with the synonyms we identified (pie, zaw, deep dish), and then replace pizza in our user examples with @food to represent all variations of the term the chatbot should take into consideration (a minimal code sketch of this expansion follows the list below). Likewise, we could add values and synonyms for other types of food the user could order, such as salads, appetizers, and desserts, which would allow us to have a single intent for ordering food:

  • I want to order @food.
  • I’d like a @food.
  • I’ll take a @food.
  • I need a @food.
  • I’m ordering a @food.
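
To make this concrete, here is a minimal sketch in Python (rather than any particular chatbot platform's format) of how the intent and entity above might be declared, and how the @food slot could be expanded into concrete training utterances. The structure and function names are illustrative assumptions.

    # Illustrative intent/entity declarations, not a specific framework's API.
    food_entity = {
        "name": "food",
        "values": {
            "pizza": ["pie", "zaw", "deep dish"],
            "salad": ["garden salad", "caesar salad"],
        },
    }

    order_food_intent = {
        "name": "order_food",
        "utterances": [
            "I want to order @food.",
            "I'd like a @food.",
            "I'll take a @food.",
            "I need a @food.",
            "I'm ordering a @food.",
        ],
    }

    def expand_utterances(intent, entity):
        """Substitute every entity value and synonym into the @slot."""
        slot = "@" + entity["name"]
        for template in intent["utterances"]:
            for value, synonyms in entity["values"].items():
                for term in [value, *synonyms]:
                    yield template.replace(slot, term)

    # Produces "I want to order pizza.", "I'll take a deep dish.", and so on.
    for utterance in expand_utterances(order_food_intent, food_entity):
        print(utterance)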

Dialogue is what your users will ultimately see and interact with based on the AI’s interpretation of user goals. Given the user’s input, how should the chatbot respond when a specified pattern of intents and/or entities is recognized? Conversely, how should the chatbot respond when it does not understand the user’s input? For example, the chatbot could offer options that help move the conversation along rather than simply replying, “I don’t understand.”
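
As a rough illustration, a dialogue handler might fall back to a menu of options when the classifier's confidence is low. The intent names, responses, and 0.7 threshold below are all hypothetical.

    # Minimal sketch of dialogue selection with a constructive fallback.
    RESPONSES = {
        "order_food": "Great, what would you like to order?",
        "check_order": "Sure, what's your order number?",
    }

    FALLBACK_OPTIONS = ["ordering food", "checking an existing order"]

    def respond(nlu_result):
        intent = nlu_result.get("intent")
        confidence = nlu_result.get("confidence", 0.0)
        if intent in RESPONSES and confidence >= 0.7:
            return RESPONSES[intent]
        # Fallback: offer choices rather than a dead-end apology.
        return "I'm not sure I follow. I can help with: " + ", ".join(FALLBACK_OPTIONS)

    print(respond({"intent": "order_food", "confidence": 0.92}))
    print(respond({"intent": "weather", "confidence": 0.31}))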

Conversation Design: Considerations

There are a lot of decisions that need to be made and actions taken before a single word of dialogue is written:

  • Determine the chatbot’s purpose.
  • Conduct research to understand the domain.
  • Understand the user goals for interacting with the chatbot.
  • Identify the intents and entities.
  • Select the tone and voice of the chatbot.
  • Map the conversational branches in a flow.
  • Write dialogue.

Identifying the chatbot’s purpose is critical to understanding whether a chatbot is the best solution to the problem at hand.

Research is critical in chatbot design. Unless the conversation designer is a subject matter expert (SME), the designer will need to talk to the SMEs and evaluate any available information to determine which user goals the chatbot should handle and which should go to a human agent. Research with users is necessary to understand their expectations for the chatbot. Determining what users want to do and how they might state their goals is key to building a successful dialogue interaction.

Before writing dialogue, consider the tone and voice of the chatbot, ensuring it is consistent with the existing brand and other available materials. For example, if your company sells pet toys, then having a happy dog personality for your chatbot may very well match your brand. But if the chatbot is for a bank, a dog-like chatbot might feel off or out of place.

If the conversation flow can include branching, it is helpful to map out the dialogue for an intent using a flow diagram so that the user does not end up in an unintentional dead end or loop with the chatbot.
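
One lightweight way to approximate that check is to represent the flow as a directed graph and test it for stranded nodes and inescapable loops. The sketch below, with a hypothetical pizza-ordering flow, is one way this might be done.

    # Hypothetical dialogue flow as a directed graph: node -> next nodes.
    FLOW = {
        "greet": ["ask_size"],
        "ask_size": ["ask_toppings"],
        "ask_toppings": ["confirm"],
        "confirm": ["ask_size", "done"],  # the user may revise the order
        "done": [],
    }

    TERMINAL = {"done"}

    def dead_ends(flow):
        """Nodes with no outgoing edge that are not intended endings."""
        return [node for node, nxt in flow.items() if not nxt and node not in TERMINAL]

    def trapped_nodes(flow):
        """Nodes from which the conversation can never reach a terminal node."""
        def reaches_terminal(node, seen=frozenset()):
            if node in TERMINAL:
                return True
            if node in seen:
                return False
            return any(reaches_terminal(nxt, seen | {node}) for nxt in flow[node])
        return [node for node in flow if not reaches_terminal(node)]

    print(dead_ends(FLOW), trapped_nodes(FLOW))  # both empty for this flow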

Conversation Design: Best Practices

The goal of conversation design is to create an experience that feels natural while giving proper attention to grammar, spelling, and formatting to make the text easy to read:

  • Use contractions.
  • Avoid “yes” or “no” responses from the chatbot.
  • Move the conversation forward with each response.
  • Attempt to include the solution in the response.
  • Provide links to videos or more detailed textual explanations, as needed.
  • Give users buttons or text links to clarify options and reduce misunderstandings.
  • Update intents, entities, and dialogue to reflect the dynamic nature of content.

Interaction Design: Best Practices

In “Designing for AI—A UX Approach,” Marielle Lexow mentions that in addition to considering what the chatbot will say, conversation designers need to also consider the overall interaction with the user:

  • Set user expectations early.
  • Use a common language.
  • Enable users to have flexibility in interaction.
  • Design to create trust, transparency, and explanation.
  • Enable users to have control and provide feedback.
  • Test your chatbot to ensure it’s working as designed.

Chatbot Design: Real-World Examples

We incorporated many of these best practices in our designs of a voice-enabled chatbot for technical support and a text-based chatbot for employee-related questions.

One key to success is to set expectations regarding how the AI can assist and how the user can interact with it from the start of the interaction. In designing the voice-enabled chatbot, we communicated to the user from the start the specific technical issues it could address. It was also essential to explain early on how the user could interact with and navigate in the application by noting basic voice commands (e.g., main menu, repeat, and agent to escape the AI and speak to a human agent). By setting user expectations, we saw increased user satisfaction and positive feedback, plus fewer user errors.

A voice-enabled chatbot should also mimic human conversational speech patterns by pausing between sentences when it picks up speech and by allowing the user to interrupt with a spoken command or intent. A successful conversation with a person or a machine depends on a common language that both parties understand. Sajid Saiyed, in “Design Considerations for Conversational UX,” notes that the voice-enabled chatbot should learn our language and understand our intents, not vice versa. Unfortunately, in our voice-enabled chatbot, user utterances were not always correctly understood, which required some users to repeat their intents and eventually routed them back to the main menu, leading to increased frustration and decreased satisfaction.

When users have a higher expectation of AI applications than is warranted by the current level of technology, they often experience disappointment if their expectations go unrealized. Because AI systems use specific algorithms and models to analyze and interpret data autonomously, it is critical that users feel in control and develop a certain level of trust when using AI applications.

In the article “UX of AI,” Lennart Ziburski mentions that one way to facilitate trust in AI is to make users aware of how the AI system came to a decision or recommendation, rather than acting as a black box. Users must be able to trust the provided answers and understand how they were determined. In addition, when users provide sensitive personal information (e.g., user IDs, passwords), they need to trust the AI to keep the information confidential. This trust extends to any navigational links provided by your AI.

In another chatbot created for employee-related questions, many internal links identified the information source, which helped users determine if they had already visited that link. Some of the links, however, were external to the company and were not recognized by users, making them feel less trusting of where the links would take them.

Chatbots are living applications that require ongoing maintenance to thrive and consistently provide a positive user experience. Part of this process is having the right tools available to gather user feedback, particularly for questions or intents where the AI was incorrect. Even better, the tool should automatically capture user behavior to determine which intents are working well and which ones are not. The chatbot application we designed did collect basic user sentiment about its responses in the form of thumbs up and thumbs down, but this required people to analyze the data and take manual corrective action. However, in a beta version of the chatbot, we collected a complex set of field metrics for each intent that the team could examine for potential improvements.

Allowing for personalization of the content by users provided them with a greater sense of control and let them ensure the answers and interaction met their needs. In the voice-enabled chatbot we developed for technical support, users had the option to allow the AI to intervene early on if it had a solution or wait until after they had proceeded through the menu options. Users could choose to have their answers delivered via a voice response on the phone or by text in an email. For the text-enabled chatbot, users could choose to either type their answer or use the provided links. Providing users with more personalization and control resulted in more positive feedback and greater satisfaction with the chatbots.

UX Considerations to Think About Before You Build

When many people think of chatbots, a 100% conversational interface comes to mind. The chatbot gives a greeting and asks what the user wants, then the user responds in a text format. Conversation then continues in turns back and forth until the user either gets an answer or gives up in frustration. Such an open-ended interaction assumes a mature chatbot with a well-defined knowledge corpus. If your chatbot, however, is still in the building stage or not meeting user expectations, you should consider some other approaches.

Consider adding menu-driven prompts. If your chatbot can only respond to a narrow set of topics, embrace a more closed-ended approach by leveraging menus in place of natural language. Your users should not have to guess what the chatbot is good for—spell it out. Your dependency on this approach can diminish as your knowledge corpus improves.

Leverage personalization. If someone signs into a chatbot, only offer options that apply to that user based on their account details. It is a frustrating experience when the user needs to supply the same information multiple times or information that the chatbot can access from their records.

Consider alternative solutions. While chatbots are ubiquitous and every business seems to want one, consider if a chatbot is the most appropriate interaction to meet your users’ goals and needs. Sometimes a search option or an FAQ is the best format instead of a chatbot. Users want the information quickly and may not be impressed with newfangled technology.

Chatbots perform better when

  • the domain area is limited in scope,
  • user goals are known and well-defined, and
  • machine learning or rule-based queries are used to improve the discoverability of high-value information.

If your chatbot does not meet these criteria, consider integrating your chatbot into your existing search experience. As noted in Temple et al.’s paper, “Not Your Average Chatbot: Using Cognitive Intercept to Improve Information Discovery,” cognitive intercept is a concept created at IBM whereby the chatbot runs in the background as the user searches in a standard-looking search box. If the chatbot has a high confidence match to the query, it displays its match in addition to the search results. Otherwise, it remains in the background, and the user continues with a standard search. This combined approach allows users to use a single, shared interface with lower cognitive overhead while tempering expectations for your chatbot. You can only make a first impression once.
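
A rough sketch of that logic follows. The search index, the chatbot interface, and the 0.8 confidence threshold are illustrative assumptions, not IBM's implementation.

    # Sketch of the cognitive-intercept idea under stated assumptions.
    CONFIDENCE_THRESHOLD = 0.8

    def handle_query(query, search_index, chatbot):
        """Always run the standard search; surface the chatbot's answer
        only when it has a high-confidence match for the query."""
        results = search_index.search(query)           # user sees a normal search box
        answer, confidence = chatbot.classify(query)   # chatbot runs in the background
        if confidence >= CONFIDENCE_THRESHOLD:
            return {"chatbot_answer": answer, "results": results}
        return {"results": results}                    # chatbot stays invisible

    # Tiny stubs so the sketch runs end to end.
    class StubIndex:
        def search(self, query):
            return [f"search result for {query!r}"]

    class StubBot:
        def classify(self, query):
            return ("Reset your password under Settings > Security.", 0.9)

    print(handle_query("reset my password", StubIndex(), StubBot()))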

At every step of the chatbot creation process, be curious. Ask if a decision adds value to the process and if it helps to meet the goals of the business or user. If the answer to either question is no, then re-evaluate your approach.

Conclusion

Chatbots have found their way into our homes to help with home automation and into our daily lives as information devices that can answer questions or solve real problems. While machine learning will continue to improve all aspects of the chatbot experience, UX researchers and designers will continue to have critical roles in delivering an experience that anticipates user intents and responds in a conversational style that feels familiar.

Ace Up Your Sleeve: The Developer on Your UX Team

Whether your development practice is Agile, staged, or waterfall, and no matter what software domain you work in, one thing is true: the developers always have the last word. If they don’t write the code for your desired interface changes, then your changes don’t go into the software. In this sense, it doesn’t matter if you are “right” or if you can argue your point with mounds of test data—at some point, you have to convince a developer to make the change.

I hope we all work on well-functioning teams where we collaborate well with our developers, and where they want, as much as we do, to create usable, beautiful software that will help users succeed in whatever they are trying to do. But at some point in every release cycle, the team runs low on time and resources, and things start to get cut from the schedule. The first thing to get cut, usually, is the work that improves the usability of the software, especially in cases where you are doing “remedial” usability work on existing features.

Leftover usability work gets cut because of the way that development managers have to prioritize fixes. The first priority is crashes and problems that cause data loss, followed by bugs where things don’t work correctly. Of that class of bugs, problems where the users have a viable work-around to achieve their goals are pushed to the bottom—and that pretty much includes all usability problems.

This is an unwinnable argument. While it is true, and developers will agree, that you must have a certain amount of development time set aside for usability concerns, when it comes to a bug-by-bug comparison, usability loses out. It is difficult to argue successfully that reworking the layout of a dialog to make it more understandable is more important than fixing a crash. One can try to argue that an unusable feature is broken, but as long as doing the task is technically possible—no matter how difficult—it will be considered less important to development than a task that can’t be done at all.

This should be no surprise; developers and development managers are generally rewarded for producing code that is stable, functional, and on time. UX practitioners are rewarded for the usability and effectiveness of our designs, although, in the end, we don’t really control how they end up in the software.

Changing the Terms of Engagement

Having a developer on the UX team provides a different way to split development time between different needs. Instead of dividing up the work by time, you divide it up by people; the dedicated UX developer spends all of his or her time doing nothing except working on the usability agenda for a product.

Whether or not this strategy will be effective depends largely on the reporting structure of the project team. If the UX developer is a member of the main development team on loan to the UX agenda, it probably won’t work. As soon as the inevitable time crunch comes, this developer will be “borrowed back” to fix crashes and “real” bugs. Also, if the developer reports to the UX team members who report to the development manager, the same thing is almost certain to happen.

Our company division has a standalone UX team that acts on a consultancy basis to other projects, loaning out our interaction designers (and our dedicated developer) as needed. Since our developer is on a different reporting chain entirely, he or she cannot be annexed by the agenda of other projects when resources get tight. This is a critical success factor.

In our internal consultancy setup, when our developer is assigned to work with an interaction designer on a project, the development manager of that project is usually overjoyed at the prospect of having an additional developer, even one they don’t fully control. This is especially true if that developer isn’t charged to their budget.

The Kind of Developer You Need

Since your developer will be mostly working on user interface code, you don’t need someone with deep expertise in the domain field of your users or with particularly esoteric engineering skills. We find junior programmers fine for this work, because they are generally flexible and eager to acquire new skills, and will listen well to direction.

For years, our team has relied on co-op (intern) engineering and computer science students from a local university, for several reasons:

  • They are relatively inexpensive and, therefore, easier to argue for when setting a budget. Depending on your company, they may also come out of a completely different budget, since they don’t represent new permanent head count.
  • Although they rotate for four-month terms, they are available all year and are thus a full-time development resource.
  • Your commitment is only for one term at a time, so you can try this strategy once to see if it works for your team. If it works, you can then get interns sporadically as needed—although you do need to plan a few months ahead.
  • It’s a great way to scout upcoming talent you may want to hire full-time later on.
  • It exposes computer science and engineering students to the importance of usability and interaction design, which can only benefit all of us in the long run.

Deploying Your Developer

Now that you have your co-op/intern developer and a project you want to use him or her on, what is the best way to do it? The first and most important thing is to negotiate with the development manager to set the terms of involvement and support. Specifically:

  • If there are no senior developers on the UX team, the development manager will have to assign someone to act as mentor to your developer. This mentor will be responsible for helping your developer find his or her way around the code base, learn the local development/coding practices, debugging tools, and so on. While this represents some overhead, the advantages to the development manager are well worth it.
  • You need to clarify that you will be setting the agenda for this developer, and that this will be limited to things that affect usability and user experience. Of course, the development manager has final say over what gets into the code base, and quality standards for code, which your developer must live up to.
  • You should agree that your developer will fix any bugs that he or she introduces. But your developer will not be responsible for fixing every UI bug, just because it’s in the UI. (You may, however, assign particular UI bugs to your developer in cases where they have a negative impact on the user experience and are unlikely to be fixed otherwise.)
  • Your developer will also not be responsible for writing all the UI for all the new features. That should be the responsibility of the programmer coding each feature. (Later, when the time crunch comes around and key UI features are not being finished, you can get your developer to finish the key areas. But don’t announce up-front that you’ll do that, or the other coders will pay no attention to UI whatsoever.)

Your developer will also need appropriate hardware and a desk to sit at. The best way to budget or obtain these items will, of course, be particular to your workplace. If possible, have your developer sit with your interaction designers and not with the developers. This may seem to go against current ideas of co-location, but in our experience it is wiser. People pick up information and values environmentally, so if your developer is located with the interaction designers, he or she will be overhearing discussions about the user experience problems and how to improve on them. Located with the developers, he or she will be immersed in that milieu, and might “go native.” You want to ensure that your developer feels a part of your team.

If your developer is not near you, other members of the development team may start slipping additional work to your developer under the table. Junior developers may not know how to say no to such requests, and you may not realize it is even happening until you have lost a lot of time. Sitting close makes it more likely you will catch these situations.

Ways to make sure your developer feels a part of the UX effort include inviting him or her to sit in on some of the testing, making sure you explain the rationale of changes you ask him or her to do, and praising the work when it does measurably improve the interaction. Even if it was your design, the developer made it happen and should feel an equal stake in the success.

The Kind of Work to Give Your Developer

The best strategy is to give your developer high value UX work that is least likely to be done by other developers. For example, if you have designed the UI for a new feature that is on the development plan, you can be fairly certain that the development team is going to build it whether or not you loan them your developer. So this is not a good job for your developer. Let the regular development team build it.

On the other hand, say you have performed a usability test on existing shipped software, and you come up with a list of forty small label and interaction detail changes that could really enhance the overall usability. Even though these are all small, easy, low-risk fixes, chances are that they will be pushed down the priority list below new features, then below crashes and other bugs. (Then, when the time crunch comes, they will be pushed off to the next version.) These kinds of changes make excellent jobs for your developer, who can complete a large number of them in a relatively short time. The development manager will also be glad to get a bunch of small, pesky bugs off the table.

Sometimes there are features that development is willing to put in, but only in the most minimal form. You can have your developer finish those properly. (Again, don’t pre-announce that you will be doing so. Assign that work after the feature is in.)

Outside of production coding, you can use your developer to help you create code prototypes for usability testing in circumstances where low-fidelity prototyping is not feasible (for example, for highly interactive interfaces where the visceral feel of the interaction is paramount). This is a key benefit of having a UX team developer, especially if you practice formative usability testing. Without disturbing the development team at all, you can try a number of different interactions, and then pass the prototype code on to the development team as a specification.
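
As a purely illustrative example of such a prototype, the sketch below (in TypeScript, for a browser-based prototype; the names and the data-track attribute are our own invention, not from any particular product) shows lightweight instrumentation that timestamps a participant’s interactions so that each test session yields a trace the team can review:

```typescript
// A minimal sketch of the kind of throwaway instrumentation a UX-team
// developer might build into a code prototype, so each usability session
// yields a timestamped trace of the participant's interactions. All names
// here (SessionLog, data-track) are illustrative, not from any product.

interface InteractionEvent {
  timestampMs: number; // milliseconds since the session started
  target: string;      // which control the participant used
  action: string;      // what they did with it
}

class SessionLog {
  private readonly start = performance.now();
  private readonly events: InteractionEvent[] = [];

  record(target: string, action: string): void {
    this.events.push({
      timestampMs: Math.round(performance.now() - this.start),
      target,
      action,
    });
  }

  // Export the trace for analysis after the session.
  toJson(): string {
    return JSON.stringify(this.events, null, 2);
  }
}

// In the prototype, wire every control of interest to the log by marking
// it with a data-track attribute in the markup.
const log = new SessionLog();
document.querySelectorAll<HTMLElement>("[data-track]").forEach((el) => {
  el.addEventListener("click", () =>
    log.record(el.dataset.track ?? el.tagName, "click"));
});
```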

In short, by adding a developer to your UX team, you can change the terms of engagement and sidestep scheduling and priority problems at the same time. Instead of dividing the work by time (bugs first, usability fixes later), the work can be divided by person (Joe fixes bugs while Maureen fixes usability problems), ensuring a certain level of attention to usability issues. This article has expanded on this idea and offered specific guidance and advice for anyone who wants to try the approach.

Hold the Phone: A Primer on Remote Mobile Usability Testing

In recent years, remote usability testing of user interactions has flourished. The ability to run tests from a distance has undoubtedly broadened the horizons of many a UXer and strengthened the design of many interfaces. Yet while mobile devices continue to proliferate, testing mobile interactions remotely has only recently become technologically possible. We took a closer look at several of the tools and methods currently available for remote mobile testing and put them to the test in a real-world usability study. This article discusses our findings and recommendations for practitioners conducting similar tests.

History of Remote Usability Testing

Moderated remote usability testing is a usability evaluation in which the researcher and the participant are located in different geographical areas. The first remote usability evaluations of computer applications and websites were conducted in the late 1990s. These studies were almost exclusively automated and were neither moderated nor observed in real time. Qualitative remote user testing was also conducted, but the research was asynchronous: users were prompted with pre-formulated questionnaires, and researchers reviewed their responses afterward.

Remote user research has come a long way since this time. Researchers can use today’s internet to communicate with participants in a richer and more flexible way than ever before. Web conferencing software and screen sharing tools have made initiating a moderated remote test on a PC as simple as sharing a link.

Pros and Cons of Remote Testing

In deciding whether a remote usability test is right for a particular project, researchers must consider the benefits the methodology affords as well as the drawbacks. Table 1 details this comparison.

Table 1. Benefits and drawbacks of remote usability testing

Benefits of Remote Testing

  • Enhanced research validity
      + Improved ecological validity (for example, the user’s own device)
      + More naturalistic environment; real-world use case
  • Lower cost and increased efficiency
      + Less travel and fewer travel-related expenses
      + Decreased need for lab and/or equipment rental
  • Greater convenience
      + Ability to conduct global research from one location
      + No participant travel to and from the lab
  • Expanded recruitment capability
      + Increased access to a diverse participant sample
      + Decreased costs may allow for more participants

Drawbacks of Remote Testing

  • Reduction in quality of data
      – Inherent latency in participant/moderator interactions
      – Difficult to control the testing environment (distractions)
  • Expanded spectrum of technical issues
      – Increased reliance on the quality of the Internet connection
      – Greater exposure to hardware variability
  • Diminished participant-researcher interaction
      – Restricted view of participant body language
      – Sometimes difficult to establish rapport
  • Reduced scope of research
      – Typically limited to software testing
      – Shorter recommended session duration

Remote Usability Testing with Mobile Devices

With mobile experiences increasingly dominating the UX field, it seems natural that UX researchers would want to expand their remote usability testing capabilities to mobile devices. However, many of the mobile versions of the tools commonly used in desktop remote testing (for example, GoToMeeting and WebEx) don’t support screen sharing on mobile devices. Similar tools designed specifically for mobile platforms just haven’t been available until fairly recently.

As a result, researchers have traditionally been forced to shoehorn remote functionality into their mobile test protocols. Options were limited to impromptu methods such as resizing a desktop browser to mobile dimensions, or implementing the “laptop hug” technique where users are asked to turn their laptop around and use the built-in web cam to capture their interactions with a mobile device for the researcher to observe.

Unique Challenges of Testing on Mobile Devices

In addition to the limitations of common remote usability testing tools, other unique challenges are inherent in tests with mobile devices. First, operating systems vary widely—and change rapidly—among the mobile devices on the market. Second, the tactile interaction with mobile devices cannot be tracked and captured as readily as long-established mouse and keyboard interactions. Third, mobile devices are, by their nature, wireless, meaning reduced speed and reliability when transferring data. Due to the unique challenges of testing mobile devices, the tools currently available on the market still struggle to meet all the needs of remote mobile usability tests.

Overview of the Tools

In many moderated remote testing scenarios focusing on desktop and laptop PCs, researchers can easily view a live video stream of the participant’s computer screen or conversely, the remotely located participant can control the researcher’s PC from afar. Until recently, neither scenario was possible for testing focused on mobile devices.

In the last decade, improvements in both portable processing architectures and wireless networking protocols have paved the way for consumer-grade mobile screen streaming. As a result, researchers are beginning to gain a means of conducting remote mobile user testing accompanied by the same rich visuals they’ve grown used to on PCs.

Tool configurations

At present, moderated remote testing on mobile devices can be accomplished in a number of ways. These methods represent a variety of software and hardware configurations and are characterized by varying degrees of complexity. Figure 1 depicts four of the most common remote mobile software configurations that exist today.

A diagram depicting four researcher-participant software configurations for accomplishing remote mobile user testing. See text for detailed explanation.
Figure 1. Four configurations for remote mobile testing

  • Configuration A: The participant installs one tool on both their mobile device and their PC, which lets them mirror their mobile screen onto their PC. Then, both the participant and the researcher install one web conferencing tool on each of their PCs. This enables the researcher to see the participant’s mirrored mobile screen shared from the participant’s PC. Example: Mirroring 360.
  • Configuration B: The participant installs one tool on their PC. The native screen mirroring technology on their mobile device (for example, AirPlay or Google Cast) works with the tool on their PC, so they do not need to install an app on their phone. Then, both the participant and the researcher install one web conferencing tool on each of their PCs. This enables the researcher to see the participant’s mirrored mobile screen shared from the participant’s PC. Examples: Reflector 2, Air Server, X-Mirage.
  • Configuration C: Both the participant and the researcher install one web conferencing tool on each of their PCs. The native screen mirroring technology on the participant’s mobile device (for example, AirPlay or Google Cast) works with the tool on their PC, so they do not need to install an app on their phone. In addition, this tool enables the researcher to see the participant’s mirrored mobile screen shared from the participant’s PC. Example: Zoom.
  • Configuration D: The participant installs one tool on their mobile device, and the researcher installs the same tool on their PC. The tool enables the participant’s mobile screen to be shared directly from their mobile device to the researcher’s PC via the Internet. Examples: Join.me, Mobizen, TeamViewer, GoToAssist.

As researchers, we typically want to make life easy for our test participants. Here, that means minimizing the number of downloads and installations required of the participant. As a result, a single installation on the remotely located participant’s end (configurations C and D) is clearly preferable to multiple installations (configurations A and B). While configuration C does not require the participant to download an app on their mobile device, it does require them to have a computer handy during the session. Configuration D, on the other hand, does not require the participant to use a computer, but it does require them to download an app on their mobile device.
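
To make the trade-off concrete, here is a minimal sketch in TypeScript (our own illustration; the data structure and labels are not drawn from any tool’s documentation) that encodes the four configurations and ranks them by how many installations the participant must perform:

```typescript
// Encode the four configurations above in terms of what the remotely
// located participant must install and use.

interface Configuration {
  name: string;
  participantInstalls: string[]; // software the participant must set up
  participantNeedsPC: boolean;   // is a computer required as an intermediary?
}

const configurations: Configuration[] = [
  {
    name: "A",
    participantInstalls: [
      "mirroring tool (mobile device)",
      "mirroring tool (PC)",
      "web conferencing tool (PC)",
    ],
    participantNeedsPC: true,
  },
  {
    name: "B",
    participantInstalls: ["mirroring receiver (PC)", "web conferencing tool (PC)"],
    participantNeedsPC: true,
  },
  {
    name: "C",
    participantInstalls: ["web conferencing tool (PC)"],
    participantNeedsPC: true,
  },
  {
    name: "D",
    participantInstalls: ["screen sharing app (mobile device)"],
    participantNeedsPC: false,
  },
];

// Rank configurations by participant burden: the fewer installations, the better.
const byBurden = [...configurations].sort(
  (a, b) => a.participantInstalls.length - b.participantInstalls.length,
);

for (const c of byBurden) {
  console.log(
    `Configuration ${c.name}: ${c.participantInstalls.length} install(s), ` +
      `PC required: ${c.participantNeedsPC}`,
  );
}
```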

Characteristics of the ideal tool

There are many tools that claim to support features that might aid in remote mobile testing. Unfortunately, when evaluated, most of these applications either did not function as described, or functioned in a way that was not helpful for our remote testing purposes.

As we began to sift through the array of software applications available in app stores and on the web, we quickly realized that we needed to come up with a set of criteria to assess the options. The table below summarizes our take on the ideal characteristics of remote usability testing tools for mobile devices.

Table 2. Descriptions of the ideal characteristics

  • Low cost
      · The cheaper the better, if all else is comparable
  • Easy to use
      · Simple to install on the participant’s device(s) and easy to remove
      · Painless for participants to set up and use; not intimidating
      · Quick and simple to initiate, to minimize time spent on non-research activities
      · Allows for remote mobile mirroring without a local computer as an intermediary
  • High performing
      · Minimal lag time between participant action and moderator perception
      · Can run alongside other applications without impacting experience or performance
      · Precise, one-to-one mobile screen mirroring, streaming, and capture
      · Accurate representation of participants’ actions and gestures
  • Feature-rich
      · Ability to carry out other vital aspects of research in addition to screen sharing (for example, web conferencing, in-app communication, recording)
      · Platform agnostic: fully functional on all major mobile platforms, particularly iOS and Android
      · Allows participants to make phone calls while mirroring their screen
      · Protects participant privacy:
          o Allows participants to remotely control the researcher’s mobile device via their own device
          OR, if the participant must share their own screen:
          o Clearly warns the participant when mirroring begins
          o Provides the participant with complete control over starting and stopping screen sharing
          o Snoozes device notifications while the screen is being shared
          o Shares only one application rather than mirroring the whole screen

How we evaluated the tools

Of the numerous tools we uncovered during our market survey, we identified six that represented all four of the configuration types and also embodied at least some of the aforementioned ideal characteristics. Based on in-house trial runs, we subjectively rated these six tools across five categories to more easily compare and contrast their strengths and weaknesses. The five categories, which together encompassed our ideal characteristics, were:

  1. Affordability
  2. Participant ease-of-use
  3. Moderator convenience
  4. Performance and reliability
  5. Range of features

The rating scale ran from 1 to 10, where 1 was the least favorable and 10 the most favorable. The spider charts below display the results of our evaluation. As the colored fill (that is, the “web”) expands outward toward each category name, it indicates a more favorable rating for that characteristic. In other words, a tool rated 10 in all categories would have a web that fills the entire graph.
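
For readers who want to reproduce this kind of chart, the sketch below shows one way to generate a simple spider chart as an SVG file from five category ratings. The tool name and rating values in the example are hypothetical placeholders, not the actual scores from our evaluation.

```typescript
// Generate a five-category spider (radar) chart as an SVG string.
// Ratings are on the 1-10 scale described above.

const CATEGORIES = [
  "Affordability",
  "Participant ease-of-use",
  "Moderator convenience",
  "Performance and reliability",
  "Range of features",
];

// Compute the (x, y) position of a rating on axis `index`, with axes
// arranged evenly around the center, starting at 12 o'clock.
function point(
  index: number,
  rating: number,
  center: number,
  radius: number,
): [number, number] {
  const angle = (2 * Math.PI * index) / CATEGORIES.length - Math.PI / 2;
  const r = (rating / 10) * radius; // scale the 1-10 rating onto the axis
  return [center + r * Math.cos(angle), center + r * Math.sin(angle)];
}

function radarChartSvg(toolName: string, ratings: number[], size = 300): string {
  const center = size / 2;
  const radius = 0.4 * size;

  // One axis line per category, drawn out to the maximum rating of 10.
  const axes = CATEGORIES.map((_, i) => {
    const [x, y] = point(i, 10, center, radius);
    return `<line x1="${center}" y1="${center}" x2="${x.toFixed(1)}" y2="${y.toFixed(1)}" stroke="#ccc"/>`;
  }).join("");

  // The "web": a polygon whose vertices move outward as ratings improve.
  const vertices = ratings
    .map((rating, i) => {
      const [x, y] = point(i, rating, center, radius);
      return `${x.toFixed(1)},${y.toFixed(1)}`;
    })
    .join(" ");

  return (
    `<svg xmlns="http://www.w3.org/2000/svg" width="${size}" height="${size}">` +
    `<title>${toolName}</title>${axes}` +
    `<polygon points="${vertices}" fill="steelblue" fill-opacity="0.5" stroke="steelblue"/>` +
    `</svg>`
  );
}

// Hypothetical example: a tool rated 10 everywhere would fill the whole graph.
console.log(radarChartSvg("Example tool", [7, 9, 8, 6, 5]));
```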

We are not affiliated with any of these tools or their developers, nor are we endorsing any of them. The summaries of each tool were accurate when this research was done in early 2016. 

Mobizen and TeamViewer had particular strengths in affordability and ease of use, respectively. However, Join.me and Zoom notably fared the best across the five dimensions overall.

Having done this analysis, when it came time to conduct an actual study, we had the information we needed to select the right tool.

Case Study with a Federal Government Client

Ultimately, we can test tools until we run out of tools to test (trust us, we have). However, we also wanted to determine how they actually work with real participants, real project requirements, and real prototypes to produce real data. We had the opportunity to run a remote mobile usability test with a federal government client to further validate our findings.

Our client was interested in testing an early prototype of their newly redesigned responsive website. As a federal government site, it was important to include a mix of participants from geographies across the U.S. The participants also had a specialized skill set, meaning recruiting would be a challenge. As a result, we proposed using these newly researched tools to remotely capture feedback and observe natural interactions on the mobile version of the prototype.

We chose two tools to conduct the study: Zoom (for participants with iOS devices) and Join.me (for participants with Android devices). We chose these tools because, as demonstrated by our tool analysis, they met our needs and were the most reliable and robust of the tools we tested for each platform.

To minimize the possibility of technical difficulties during the study itself, we walked participants through the installation of the tools and demonstrated the process to them in the days prior to the session. This time allowed us to address any issues with firewalls and network permissions that are bound to come up when working with web conferencing tools.

Using this method, we successfully recruited seven participants to test the mobile version of the prototype (as well as eight participants to test the desktop version). We pre-tested the setup with three participants whom we ultimately had to transfer to the desktop testing group due to technical issues with their mobile devices. Dealing with these issues and changes during the week prior to the study ensured that the actual data collection went smoothly.

Lessons Learned About Remote Mobile Testing

Not surprisingly, we learned a lot from this first real-world usability study using these methods and tools.

  • Planning ahead is key. Testing the software setup with the participants prior to their scheduled session alleviated a great deal of stress during an already stressful few days of data collection. For example, our experience was that AirPlay does not work on enterprise networks. We were able to address this issue well in advance of the study.
  • Practice makes perfect. Becoming intimately familiar with the tools to be used during the session allows you to more easily troubleshoot any issues that may arise. In particular, becoming familiar with what the participant sees on their end can be useful.
  • Always have a backup. When technical issues arise, it’s always good to have a backup plan. We knew that if the phone screen sharing didn’t work during a session, we could quickly fall back to one of the less optimal, but still valid, mobile testing methods, such as resizing the browser to a mobile-device-sized screen. If Zoom or Join.me didn’t work at all, we knew we could revert to our more reliable and commonly used tool for sharing desktops remotely, GoToMeeting. Fortunately, we didn’t need to use either of these options in our study.
  • Put participants at ease. Give participants a verbal overview of the process and walk them through it on the phone, rather than sending them a list of complex steps for them to complete on their own.
  • Tailor recruiting. By limiting recruiting to either iOS or Android (not both), you will only need to support one screen sharing tool. In addition, recruit participants who already possess basic mobile device interaction skills, such as being able to switch from one app to another. These tech-savvy participants may be more representative of the types of users who would be using the product you are testing.

The Future of Remote Testing of Mobile Devices

While we’re optimistic about the future of remote mobile usability testing, there is certainly room for improvement in the tools currently available. Many of the tools mentioned in our analysis are relatively new, and most were not developed specifically for use in user testing. As such, these technologies have a long way to go before they meet the specifications of our “ideal tool.”

To our knowledge, certain characteristics have yet to be fulfilled by any tool on the market. In particular, we have yet to find an adequate means of allowing a participant to control the researcher’s mobile device from their own mobile device, nor have we found a tool that shares only a single app on the user’s phone or tablet rather than the whole screen. Finally, and perhaps most importantly, we have yet to find a tool that works reliably with both Android and iOS, not to mention other platforms.

Nevertheless, mobile devices certainly aren’t going anywhere and demand for better mobile experiences will only increase. As technology improves and the need for more robust tools is recognized, it’s our belief that testing mobile devices will only get easier.

 

Author’s Note: The information contained in this article reflects the state of the tools as reviewed when this article was written. Since that time, the technologies presented have evolved, and will likely continue to do so.

Of particular note, one of the tools discussed, Zoom, has added new mirroring capabilities for Android devices. Although a Zoom app for Android was available when we reviewed the tools in early 2016, screen mirroring from Android devices was not supported. Therefore, this functionality is not reflected in its ratings.

We urge readers to be conscious of the rapidly changing state of modern technologies, and to be aware of the potential for new developments in all of the tools discussed.

 

 


More Reading

A Brief History of Usability, Jeff Sauro, MeasuringU, February 11, 2013.

An Empirical Comparison of Lab and Remote Usability Testing of Web Sites, Tom Tullis, Stan Fleischman, Michelle McNulty, Carrie Cianchette, and Marguerite Bergel, Usability Professionals Association Conference, July 2002.

Laptop Hugging, Ethnio Blog, October 29, 2011.

Remote Evaluation: The Network as an Extension of the Usability Laboratory, H. Rex Hartson, Jose C. Castillo, John Kelso, and Wayne C. Neale, CHI 1996, 228–235.

Internet Communication and Qualitative Research: A Handbook for Researching Online, Chris Mann and Fiona Stewart, Sage, 2000.
