
MESS Days: Working with Children to Design and Deliver Worthwhile Mobile Experiences

If designing usable products for people is hard, designing products for children is even harder. Children have a more difficult time conveying what they want, we have a harder time working out the important factors of what they say, and they (especially younger children) tend to be rather non-critical. “Did you like it?” the developers invariably ask the children. “Yes. It was great!” is the inevitable reply.

The Child Computer Interaction Group at UCLan in Preston, UK, is a dedicated research group that concentrates on the specific area of usability and user experience design for children’s technologies. Formed in 2002, the group carries out research and development projects that focus on the creation and understanding of interactive products for children aged three to sixteen. Our work is collaborative: we work with researchers across the globe but also, and more interestingly, with children and their teachers in immersive design and evaluation activities.

MESS Days

In several projects we have used our own pioneering approach with children, called MESS (Mad Evaluation Sessions with Schoolchildren) days. MESS days are events where a whole school class of children take part in a series of activities that typically include evaluations of products, design sessions, small research studies, and activities that are purely for fun. What is unique about MESS days is that we occupy a whole classroom of children for anything between an hour and all day, depending on the work being done. The MESS day epitomizes our approach to research and interaction design with children—that it will be messy, that it should be inclusive, that it should be fun, and that it should be fast paced and constantly refreshing.

The UMSIC Project

Recently we have used the MESS day approach in the EU-funded UMSIC project. The UMSIC (Usability of Music for Social Inclusion of Children) project plans to deliver novel mobile music-making applications that can be used collaboratively by children—especially those who are new immigrants in a European country or who have attentional difficulties. In the UMSIC project, the development of the technology is far removed from the design, both in terms of location—the UK and northern Finland—and in terms of context—designers used to working with children, developers used to working with Java.

Design: Obstructed Theatre

In the early stages of the UMSIC project, we held MESS days to gather ideas for designs for mobile products as well as to test out new concepts and new interaction techniques. For example, in one session, around half the children spent their time designing mobile music products using paper and cards, pipe cleaners, glue, and other prototyping items (see Figure 1). At the same time, the other children in the class tried out and commented on existing mobile music products including iPhone apps, Nintendo DS games, and PC applications.

Figure 1. A design from the first series of MESS days.

To initiate the design session, we employed a method that we call “Obstructed Theatre.” Obstructed Theatre is a modification of a method first used at Newcastle University in design work with adults; in their variation, two professional actors talked about a technology product that was hidden from view. In our own version of this technique, we adapted it for children by having two twelve-year-olds videotape a short sketch of a situation in which the mobile device would be used. Keeping the item hidden, we then used this short video to kick off the design session. This method allowed us to convey the key requirements for the product to be designed without giving anything away about how it should look.

Evaluation: The Fun Toolkit

In our evaluations of competing technologies, we were interested in the way children used them, the features they tended to use, and the ways in which they interacted with the products. For this we mainly used observations and kept notes of interesting things. However, we also used the Fun Toolkit, a set of tools we specifically designed to help us overcome the “Yes, it was great” syndrome that is known to occur in user experience evaluations with children. The Fun Toolkit includes three simple tools: the Smileyometer (see Figure 2), the Fun Sorter (see Figure 3), and the Again Again table (see Figure 4). By combining the results obtained with these tools, it is possible to determine which products and features the children prefer.

Figure 2. The Smileyometer from the Fun Toolkit. Children check one of the faces but sometimes they put multiple checkmarks on the Brilliant face because the scale just isn’t great enough!
Figure 3. The Fun Sorter from the Fun Toolkit. When children fill these in, we tend to give them photos of the things they have done so that all they have to do is stack them in the appropriate places.
Figure 4. The Again Again table from the Fun Toolkit. Children were asked to indicate whether or not they would like to play these games again.
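
For teams that want to tabulate results like these, the sketch below shows one way the three instruments might be combined into a per-product comparison. It is only an illustration: the 1–5 Smileyometer coding, the yes/maybe/no Again-Again scoring, the equal weighting, and all the type and function names are assumptions made for this example, not part of the published Fun Toolkit.

```typescript
// Hypothetical coding of one child's Fun Toolkit responses for one product.
type AgainAgain = "yes" | "maybe" | "no";

interface FunToolkitResponse {
  product: string;
  smileyometer: 1 | 2 | 3 | 4 | 5; // Awful .. Brilliant
  funSorterRank: number;           // 1 = ranked most fun of the products compared
  againAgain: AgainAgain;          // "Would you like to do this again?"
}

// Combine the three instruments into a single 0..1 score per product.
// The normalisation and equal weighting are assumptions for illustration only.
function rankProducts(responses: FunToolkitResponse[], productCount: number): Map<string, number> {
  const againScore: Record<AgainAgain, number> = { yes: 1, maybe: 0.5, no: 0 };
  const totals = new Map<string, { sum: number; n: number }>();

  for (const r of responses) {
    const smiley = (r.smileyometer - 1) / 4;
    const sorter = (productCount - r.funSorterRank) / Math.max(productCount - 1, 1);
    const combined = (smiley + sorter + againScore[r.againAgain]) / 3;

    const t = totals.get(r.product) ?? { sum: 0, n: 0 };
    totals.set(r.product, { sum: t.sum + combined, n: t.n + 1 });
  }

  // Average across all the children who tried each product.
  const averages = new Map<string, number>();
  for (const [product, t] of totals) {
    averages.set(product, t.sum / t.n);
  }
  return averages;
}
```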

Paper Prototyping

As the project progressed, we drilled down into the design space that surrounded the specifics of the product that was being designed—the JamMo. All along we knew that this would be a mobile application to be built on a Nokia touch screen device. Having gotten some general ideas from children relating to the arena of music making, we used our second series of MESS days to get a better understanding of interactivity in the mobile context.

In these MESS days, we again engaged the children in different activities. In a design-focused activity, we looked at the interface designs of the JamMo and asked children to create a set of “screens” and then position them on a “mockup” device (see Figure 5).

Just as you would with adult participants, we used this “paper prototype” to test that the interactions and flow of visuals and sounds would succeed once built. Since studies with children and touch screen interactions are rare, we also included a MESS day activity to check out the optimal sizes for interactive items on the small screens.

Children: Not Simply Smaller Users

As mentioned earlier, one of the interesting aspects of the UMSIC project was that the developers and designers were geographically and contextually separate. This created some problems conveying requirements, design ideas, interaction rules, and ideas for new concepts. It is not straightforward to take something that has been created or suggested by a child and make it understandable or relevant to a programmer. All too often the connection from child-centered design to child-centered product is lost in translation.

In the UMSIC product, by virtue of persistent communication and by communicating ideas in drawings rather than text, a good number of the design features requested from the design team made their way into the final product. A turning point in the design space was when the software programming team came face-to-face with children users for a prototype product exercise in the UK. More than any of our other communications, this event really made the users come alive for the development team. They came to realize that the children were not simply smaller users.

Including Children in Real Projects

Our research group has spent considerable time studying the usefulness and the usability of MESS days in the product design process. We have come to realize that their use varies according to the context of the work. In real software development projects, MESS days need to be structured around the development teams. In addition, products from the MESS days need to be carefully translated and made relevant to the developers. This model, including activities for children, is shown in Figure 6.

Figure 6. A model of child-centered product development.

This model can be broken into three phases:

Phase 1

Hold off designing anything until the first round of requirements gathering with children has taken place; use Obstructed Theatre so that your ideas don’t overly influence the children.

Check out competing products and discover which of them gives the best user experience using observations, the Fun Toolkit, and other approaches.

Create a first design as a paper/lo-fidelity prototype.

Phase 2

Identify any interaction problems and test them out. Ensure this testing is done in such a way that it can be generalized.

Check out the logic of the design; walk through paper- or screen-based prototypes with real children.

Create a functional prototype.

Try out the prototype with children in the company of the development team. This will have a lasting effect on the programmers. They will start to understand, and better listen to, the design team.

Phase 3

Engage children in adding their own uniqueness to the prototypes (for example, making icons, adding music, and titles). This gives ownership to the child design team.

Evaluate this prototype against its competitors and make plans to fix anything that makes it especially poor in comparison.

Plan for version two by giving children a chance to suggest improvements. This needs to be done before the product goes to market since once it is adopted, no child will ever be critical of it again.

Build the final version.

Designing with children adds a new dimension to user experience design and usability testing. Challenges still exist in finding appropriate ways to convey design requirements and design ideas across the divide from designer to developer; there is still work to be done in understanding how children can best contribute across this divide.

At the Child Computer Interaction Group at the University of Central Lancashire, Dr. Janet C. Read and her colleagues carried out child-friendly, user-centered design with whole school classes of children. Knowing that children are often intensely curious about products they have never used, the researchers adopted highly interactive methods to elicit requirements, using techniques drawn from theatre and educational practice to harness children’s energy and creativity and to capture their ideas and comments in innovative ways. This article discusses the methods used, and the key lessons learned, in a recent project in which children helped to conceive, sketch, and prototype a mobile music application. It closes with a process that can help other design teams succeed when designing together with children.

Capturing User Requirements in Health and Social Care: Applying UML for unambiguous communication

Users rarely have any input into technical standards. “User representatives” are more often technologists or policy specialists than the people who will use products created with the standard. This is especially true in healthcare technology. Doctors, nurses, care attendants, and various specialists are rarely IT-savvy, and are seen as “too valuable” to participate in detailed standards work. Creating and approving a new standard is a slow process. Mistakes in early user and task analysis may not show up until after the standard is approved, sometimes five years later. Any usability professional can predict the result: systems built on mistaken assumptions about workflow and information needs that are simply not usable.

To its credit, throughout the long standards-making process, HL7, one of the largest healthcare standards organizations, has created a methodology that incorporates input from real healthcare providers. I learned about HL7 “storyboards” from a nurse who showed me some of the storyboards, explaining how exciting it was to really understand the material. She introduced me to Isobel Frean, who shares with us how one project in Australia used the HL7 methodology to involve healthcare providers in documenting requirements for aged-care facilities.

—Whitney Quesenbery, special guest editor

In countries like the UK, Australia, and the USA, providers of healthcare to older people—including home healthcare, respite care, and long-term care workers—play a key role. Healthcare reforms have started to address the growing number and distinct types of demands that older people make on health and welfare systems. Efforts to deliver effective care depend on advances in information and communication technology (ICT), particularly on the ability of systems to work together.

Unfortunately, aging services providers have had little involvement in developing requirements and standards that inform national or regional e-health investment in social care, e-health infrastructure, or electronic client record systems.

Even in national initiatives such as the UK’s National Health Service (NHS) National Programme for IT (NPfIT), there has been no real commitment at this stage by either government or the private sector to the development of standards to meet the communication needs of aging services providers. The focus has tended to be on hospitals and physicians. This leads to incorrect assumptions; for example, that the needs of providers in residential facilities and private homes are similar to those in larger institutions, such as hospitals.

In 2001, three care provider organizations partnered with the University of Wollongong, in New South Wales, Australia, to support a requirements-gathering project funded by the Australian Research Council (ARC). They wanted to ensure that the electronic communication needs of providers were explicitly captured and not assumed by developers of the national health IT system.

Our Australian project used narrative stories and standardized modeling techniques to involve aging services providers in a systematic approach to requirements. The approach ensured that the documentation requirements of health and social care workers were obtained in ways meaningful to them, while also providing the information needed by designers and developers of healthcare and medical computer systems.

Gathering Healthcare Requirements

Many failures of health ICT projects arise from a failure to capture the needs of the end user. Over the past two decades, health information specialists have placed increased emphasis on standardizing methods for capturing user requirements. These methods have their foundation in information modeling.

Unified Modeling Language (UML) has emerged as the leading method for modeling healthcare information. UML uses unambiguous picture-based symbols and labels. It is relatively accessible to the novice reader. The international healthcare messaging standards organization, Health Level 7 (HL7), has also adopted the use of UML because it provides a common language and method for representing complex healthcare concepts.

The HL7 V3 methodology includes UML in the requirements documentation and analysis phase. Application of a formal methodology ensures that requirements documentation truly reflects the needs of all domain users.

Use Cases Capture High-Level Communication Needs

Over the course of a year, we solicited the communication needs of approximately eighty domain experts from three aging services providers and similar organizations. We employed a “Delphi” approach—using three rounds of focus groups and questionnaires to reach a consensus of opinion among the domain experts. Experts included clinical decision-makers (nurses, doctors, personal assistants, therapists, etc.), as well as administrative and financial personnel working in admissions, hospitality, and accounts.

The Delphi approach allows multiple experts dispersed over a large geographical area to reach consensus on a new area of understanding—in this case, business requirements for information exchange in healthcare for seniors.

The first round of consultations involved interviews with personnel from each of the business processes involved in the provision of aging services, including care delivery, management, and hospitality. We also interviewed individuals from outside the organization who interact with internal personnel (doctors, hospital discharge planners, etc.).

The interviews captured high-level communication needs as use cases. Each use case described a communication between an actor (human or device) interacting with a system of interest (for example, a medication management system) to receive a product of value from the system.

Use case diagram for a nursing home medication management system. Actors are the nurse (RN) responsible for medication administration and documentation, the doctor (GP) responsible for prescribing and reviewing medications, and the consumer (resident) as the beneficiary of the medication management activities. The product of value is the provision of care.
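
To make the shape of a use case concrete, here is a minimal sketch of how the medication-management example above could be recorded as structured data. The `UseCase` type and its field names are assumptions made for this illustration; they are not part of the HL7 methodology itself.

```typescript
// Illustrative record of one "conversation" captured as a use case.
interface Actor {
  role: string;            // e.g. "RN", "GP", "Resident"
  responsibility: string;  // what this actor does in the interaction
}

interface UseCase {
  name: string;
  systemOfInterest: string; // the system the actors interact with
  actors: Actor[];
  productOfValue: string;   // what the interaction delivers
}

const medicationManagement: UseCase = {
  name: "Nursing home medication management",
  systemOfInterest: "Medication management system",
  actors: [
    { role: "RN", responsibility: "Medication administration and documentation" },
    { role: "GP", responsibility: "Prescribing and reviewing medications" },
    { role: "Resident", responsibility: "Beneficiary of the medication management activities" },
  ],
  productOfValue: "Provision of care",
};
```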

Use cases were a new concept to domain experts, so we used the term “conversation” instead to elicit discussion about the most frequent, time-consuming, and inefficient interactions with external parties. Since domain experts could readily describe the conversations and interactions they have with others, this terminology made them more comfortable with the process.

Following the first round of interviews, we created and presented sixty-six use cases to the domain users for validation. Once they understood how the use cases represented the conversations they had described, they provided feedback enthusiastically.

After review and refinement by domain experts, the initial sixty-six use cases were rationalized down to fifty-five. These represented a core set of business requirements for the exchange of information between aging services providers and other stakeholders.

Storyboards Capture Dynamic Elements in Communication

After creating the high-level use cases, the next step in the HL7 methodology is to understand the details of the interactions between the actors in each use case. This level of understanding is critical for informing the functional requirements that will be used by technical designers to build the eventual electronic solutions. Each unique information flow between clearly defined actors must be clearly described. The method for documenting this is the simple tool called a storyboard.

A storyboard documents a series of interactions in a given conversation in narrative format. The storyboard is the key mechanism used in the HL7 methodology to explain the purpose of the healthcare communication activity, the pre-conditions, the actions or interactions that take place, and outcomes or post-conditions.

Storyboard example (see box at the end of the article for the text)
Format for presentation of storyboards employed in the HL7 Care Provision ballot (May 2005), used to document Australian aged care requirements.

The power of the storyboard is that domain experts can use real life scenarios to break down complex information flows by describing the content and sequence of the flow between the participating actors. If the scenario results in comments from domain users like, “That’s not the way it happens in real life,” then it is back to the drawing board.

HL7 V3 storyboards also use UML diagrams within the storyboard to illustrate the static and dynamic elements of the conversation. These diagrams enhance the precision of the storyboard and frequently provoke domain experts to identify additional information flows. These additional flows usually address the many different contexts in which care is delivered. For example, “Securing a place in a nursing home” proved to be one of the most complex storyboards; it involved multiple decision points for the care needs and eligibility of the client.

The first diagram used in a storyboard is typically an activity diagram. This helps visualize the activities and flow of a healthcare business process and clarifies and expands the content of the storyboard.

UML activity diagram illustrating a request by an RN to a GP for a new medication order.

To show a message flow sequence, the storyboard uses UML interaction or sequence diagrams, which represent objects and their relationships to one another.

For each use case in the study, the team developed one or more storyboards, producing a total of eighty-two storyboards. The scope of the storyboards confirmed the need to develop ICT solutions to support business processes in each of four discrete areas: accessing services, clinician liaison, coordination of care delivery, and account management and claims.

Good Requirements Documentation is Validated by Users

The ARC project used a Delphi approach with additional groups of domain experts from another part of the country to validate the storyboards. During successive rounds of consultations, we made changes to the storyboards until there was consensus on their accuracy. The Delphi approach was instrumental in capturing what is possibly the most comprehensive set of aging healthcare requirements documented in Australia, and possibly worldwide.

The breadth and depth of domain expert involvement in the project set it apart from previous healthcare requirements-gathering projects, not least because the process was driven by the end users themselves, rather than by system developers representing what they believed to be the needs of end users. Most importantly, the detail contained in the storyboards provides a starting point for senior healthcare organizations to better understand their existing work processes and how to adapt them for technology solutions that support the delivery of healthcare services.

[greybox]

Storyboard: Request Waiting List Status Report

Purpose: This storyboard demonstrates the flow of communications associated with querying the status of a consumer’s position on a waiting list, whether the list is maintained by an individual provider or managed regionally.

Precondition: Peter Process, Hospital Discharge Social Worker at Good Health Hospital, has previously sent requests to several nursing homes for a bed for inpatient Mr. Adam Everyman. He has been advised by each of these that Mr. Everyman has been placed on their waiting lists. As Mr. Everyman is keen to go to one of the nursing homes close to his family, he has his name on the lists for Living Legends Aged Services (LLAS) and Seniors Living Retirement Villages (SLRV). Peter Process is keen to place Mr. Everyman in the next 24-48 hours and wishes to establish the status of the applications to determine whether he needs to approach other nursing homes.

Storyboard narrative: As he is authorized to access both the LLAS and SLRV waiting lists, Peter Process requests a status report on where Mr. Everyman is positioned on each waiting list to give him some idea of the likely length of the wait. He receives a response from the LLAS waiting list advising that there are four persons ahead of Mr. Everyman on the waiting list.

Postcondition: Peter Process discusses the outcome of the responses with Mr. Everyman and they elect to wait for a vacancy to become available. 

ALTERNATIVE FLOW: Peter Process receives a response from the SLRV waiting list system advising that there are two other persons ahead of Mr. Everyman on the waiting list, but Mr. Everyman’s ACCR for high care expires in one week; accordingly, should a vacancy arise, he will not be eligible to be offered it until a current ACCR is obtained [Interaction: Waiting List Encounter Suspend, Confirmation or Care Transfer Promise Suspend, Confirmation].

POSTCONDITION: Peter Process immediately makes arrangements for the hospital ACAT to review Mr. Everyman’s ACCR so that it may be provided to Alice Admitter at SLRV, to avoid jeopardizing Mr. Everyman’s chances of imminent admission to SLRV.

[/greybox]

Experience Schematics: Diagramming the User Experience

What does it mean to characterize an individual’s experience in relation to a specific context, artifact, or environment? Assuming it’s even possible to characterize an individual’s experience, is it possible (or even desirable) to reduce the totality of an individual’s experience to an abstract diagram on a page?

Architectural Diagrams

As a “bricks-and-mortar” architect, my training provides me with several architectural models: bubble diagrams, adjacency diagrams, site analyses, floor plans, details, perspective renderings, 3D fly-throughs, scale models, and so forth. I use these models to help shape the different experiences of the building—not just the formal shape of its massing (how the volume of its parts fit with its surroundings) as seen from the sidewalk, but perhaps the way light interacts with an individual’s movement through a sequence of spaces, or the view of a mountain range through a particular window.

In the high-tech world computer engineers think about “architecture” in terms of structures, layers, and dependencies as in Figure 1. To simplify complex systems, hardware and software architects use diagrams to focus on the functionality and relationships of system components.

Figure 1. Hardware and software architecture diagrams.

Is such a diagrammatic approach available for user experience architects who want to do more than simply model user tasks?

Assuming such clean representations of user experiences are possible, what advantage would they provide? Can we distill our work down to these kinds of abstractions? If we can, do we risk losing some essential aspect of the experience itself? Do we risk impoverishing the end user’s experience by abstracting it to lines on a page?

Practices in both computer engineering and bricks-and-mortar architecture suggest that such abstractions do not create impoverished results:

  • Engineering systems, complex beyond comprehension, are distilled to straightforward descriptions. Engineers focus on local complexities-of-interest, letting the remainder of the system become background.
  • Architects manipulate sketches and models of complex built environments to help reduce risks of misrepresenting the final result.

For the engineering audience, uncomfortable with the ambiguities of human-computer interaction, we must find a way to communicate the key elements of the experience architecture. These elements inform the hardware and software architects of the places that need to be sensitive to the experience architecture. This need is indisputable. It remains to be seen whether diagramming experiences as described in this article will achieve this goal; it is a work in progress.

Experience Schematics

An experience is an individual’s (persona’s) engagement in an environment (context) over time.

  • Experiences depend on the individual’s subjective perception of the context.
  • Time may mean mere moments or a span of years.

Experience schematics are models of experiences gleaned from research with real users. An experience schematic is composed of one or more sets of experience boundaries—representations of a persona’s interactions within a context and at a moment in time.

Experience schematics may be a useful abstraction because they:

  • Describe large-scale, high-level views of the relationships of various activities to one another
  • Permit evaluation of specific designs by helping the design team determine whether proposed interactions fit the desired experience relationships
  • Provide more general expressions of the user experience than can be provided by specific key scenarios or storyboards
  • Provide the engineering teams with a familiar method of expressing architectural issues

Because they are abstractions, experience schematics cannot fully express:

  • The richness of a proposed experience—a context makes them meaningful
  • The flow of an experience—they are instead snapshots or key frames
Figure 2. A single experience boundary.

Experience Boundaries

An experience boundary (such as shown in Figure 2) describes one interaction at a specific moment in time.

Three dimensions apply to experience boundaries to create experience schematics (illustrated in the sketch following this list):

  • The relative size of an experience boundary indicates the relative amount of the persona’s attention paid to the experience.
  • The distance between boundaries implies the degree of user-effort required to integrate the two experiences.
  • The saturation of the boundary’s fill-color suggests the degree of disparity between two or more experiences.
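
As a rough illustration of how those three dimensions might be encoded, the sketch below maps attention to rectangle size and disparity to fill saturation, and leaves integration effort to the spacing between rectangles. The `Boundary` type, the scaling factors, and the SVG output are all assumptions made for this example, not part of the notation itself.

```typescript
// One experience boundary at a single moment in time.
interface Boundary {
  label: string;
  attention: number; // 0..1 -> relative size of the rectangle
  disparity: number; // 0..1 -> fill saturation; matching values read as "related"
  x: number;         // layout position; the gap between rectangles suggests
  y: number;         //   the effort needed to integrate the two experiences
}

// Render a set of boundaries as SVG rectangles (illustrative only).
function renderSchematic(boundaries: Boundary[]): string {
  const rects = boundaries.map((b) => {
    const size = 40 + b.attention * 120; // bigger boundary = more of the persona's attention
    const fill = `hsl(210, ${Math.round(b.disparity * 100)}%, 60%)`;
    return `<rect x="${b.x}" y="${b.y}" width="${size}" height="${size}" fill="${fill}">` +
           `<title>${b.label}</title></rect>`;
  });
  return `<svg xmlns="http://www.w3.org/2000/svg" width="600" height="300">${rects.join("")}</svg>`;
}

// Two contiguous experiences: the persona attends far more to B than to A,
// and the matching saturations mark the two as related.
console.log(renderSchematic([
  { label: "Experience A", attention: 0.2, disparity: 0.3, x: 20, y: 40 },
  { label: "Experience B", attention: 0.9, disparity: 0.3, x: 90, y: 40 },
]));
```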

The following table articulates several arrangements of boundaries that describe relationships between two or more experiences and form an experience schematic.

Separate, contiguous experiences
Each experience is understood as distinct. While each could stand alone without dependence on the other, the two are perceived as being tightly bound to one another. Together, the two create a perceived third experience different from each alone. The difference in size denotes the difference in user attention: the user pays far more attention to the experience denoted by the boundary on the right than to the one on the left, whether by choice or because the situation demands it.

Separate, disconnected and related experiences
Each experience is understood as distinct—each could stand alone without affecting the other. When experienced together over time, the two are understood to be related: one may be derivative of the other, or they may share a visual design language, for example. Both boundaries have the same saturation to denote this relation.

Separate, disconnected and unrelated experiences
Each experience is completely unrelated to the other. The two have no relationship to one another. Placement and use of different saturations denote the separation.

Separate, unrelated and joined experiences
Each experience is understood as distinct and has no relationship to the other except via a highly constrained third experience.

Separate, contained experiences
The outer experience defines and constrains the contained experience. The contained experience is completely dependent on the outer experience in its expression, context, and the other experiences with which it is associated.

Integrated, contained experiences
The contained experience is only barely distinguished from the outer experience—it shares virtually all elements of the outer experience but is differentiated in only one or, at most, two ways. The specific boundary condition may not be clearly perceptible and may change over time or under different conditions. The core of the contained experience is distinguishable from the core of the outer experience.

While other relationships can be imagined, the basic elements in the table suffice to describe a rich set of experiences, desired and undesired.

Example

When I use experience schematics to model a persona’s experiences, I create two diagrams:

  • The key experiences for each persona
  • The key experience flow for the persona

The context of human-computer interaction extends beyond the screen—and so do experience schematics. They apply to any human experience the designer wishes to model.

Consider, after appropriate study of a target population’s behaviors, modeling a set of experiences around preparing breakfast in the morning, as in Figure 3.

Figure 3. An experience schematic.

Imagine this schematic is a generic summary of observations of several individuals for whom I was commissioned to design a new breakfast experience. Since experiences occur over time, this snapshot might never be observed in one coherent moment. Perhaps the experience from starting breakfast to completing it would require several schematics, each shifting slightly like a cel in an animation (Figure 4).

Figure 4. A set of experiences.

After additional analysis of the observations, perhaps multiple personas would appear. For each, a different set of schematics would model the desired experiences (Figure 5).

Figure 5. Experience schematic variant for a specific persona.

For this persona, the newspaper reading experience is much more tightly bound into the eating experience, and this persona doesn’t prepare breakfast or clean up.

Experience Flows

Because experiences change over time, you could imagine boundaries shifting as different elements of the breakfast preparation experience develop. Creating a stream of shifting boundaries helps visualize the flow of the persona’s experience (Figure 6).

Figure 6. Experience flow.

While many of us believe time flows linearly, experiences may be revisited. Rather than repeat a schematic, the blue arrows in Figure 6 might circle back to a previous state.

Key Experiences

Experience schematics are abstractions of persona experiences. Use them to diagram key experiences distinguishing one persona’s experience from another.

In Cooper’s design process (www.cooper.com), context scenarios describe a persona’s optimal interactions with the designed experience. The key experience is that which best distinguishes one persona’s experience from another, analogous to a key frame in a video stream.

Let’s assume my examples model a hotel breakfast. In the first experience, the guest prepares breakfast, perhaps at a breakfast buffet, or in the hotel room. In the second experience, the guest is served breakfast by a waiter.

Part of the design commission is about delivering a better news-reading experience. Figure 7 illustrates the key experiences for two personas.

Figure 7. Two personas’ news-reading key experience.

Seeing each persona’s distribution of attention, their distinctions among the various experiences, and the relationship of each experience to the others helps in two ways:

  • We can visualize the key differences between these personas that may inform our design solutions.
  • We can communicate to the engineering teams these key differences in ways that they may be more likely to understand.

Practical Application

I’m still working through the novelty of this approach, both for my own work and with my team. When members of my engineering teams look at these diagrams, they immediately assume I’m referring to user interface screens—their experience with UX professionals to date has focused mostly on UI design. I need to take a few moments to explain that the diagrams reflect the user experience, not the user interface for a particular software application.

Others have questioned whether these blocks represent process diagrams in which the spatial position of a block implies a chronological sequence. Except in the case of experience flows, I consider each of these diagrams a cel in an animation: the diagrams reflect a moment in time rather than a time flow. My intention for the blocks is more to show relative amounts of attention, dependencies, and relationships among the experiences, both for a specific persona and across personas.

Others have misinterpreted these diagrams as representing activities or tasks rather than experiences. Creating task diagrams of specific parts of an experience captured by an experience boundary (and similarly, creating highly structured UML representations of activities occurring within a boundary), is appropriate at a more detailed design level, but it isn’t my purpose at this level of abstraction.

From an experiential design perspective and, more critically, from a user-experience architecture perspective, the point is not to model the exact interactions going on in the experience. Rather, we aim to identify the key moments during a stream of experience to focus our design efforts around, and with luck, to impart to our users our desired, designed sensibility.

Although still a work in progress, experience boundaries and schematics are providing me with one more tool for visualizing my personas’ key experiences. In addition, they provide a recognizable format for communicating my overarching design framework to my team of engineers, designers, managers, and marketers.

Feature Fake: Exploring and Testing Connected Mobile Prototypes

Connectivity, as a facet of user experience, is no longer confined to the apps that live on our mobile devices. The widening scope of the Internet of Things (IoT) and the proliferation of connected technologies like sensors, near field communication (NFC) tags, and beacons are paving the way for extensive, digitally-driven experiences “outside our devices.”

As these different pieces continually come together through novel and powerful ways in both consumer and enterprise environments, there is a greater need to ensure that product prototypes represent the intended user experiences as realistically and holistically as possible.

Experience designers pushing the boundaries of what mobile applications can do need agility and resourcefulness in making sure that product prototypes not only test well, but also seamlessly interact with other connected technologies. I constantly keep an eye out for the latest and greatest tools to make my prototypes as realistic as possible. Usually, though, these tools prove inadequate for prototyping robust mobile experiences.

User experience and the tech industry are moving forward at a faster pace than our prototyping tools. Successful design and development of products requires out-of-the-box thinking when it comes to prototyping. Employing a “feature fake” is a valuable way to explore and test mobile prototypes while factoring in the many unknown variables and missing pieces that make up the ever-growing connected world.

Fake It ‘Til You Make It

When it comes to mitigating the risk of failing to deliver a final product with a holistic user experience, knowing the details of the user journey is only half the battle. Creating an “authentic” prototype that best embodies the interactions from both inside and outside the app is the other half.

In an effort to make this happen, designers and developers should not be confined to delivering a fully functional product at this stage. The current prototyping tools are limited in this sense. The goal should be to represent and deliver the core functions of the applications using all forms of connected technologies available. In my experience it has been through these creative, unorthodox methods that I have been able to create “authentic” prototypes.

Working on the premise that features can be “faked” can help designers and developers even during the ideation stage. I still use conventional prototyping tools to convey flow, aesthetic, interaction, and motion. Nonetheless, it’s still easy to fall short on seeing the bigger picture because these prototyping tools are not equipped to mimic user experiences that incorporate facets of larger connected contexts.

Mobile experiences today can operate outside the app through sensors, triggers, events, and multi-device intercommunication. Feature faking is a tried and tested practice that can redirect a product’s direction; in several cases in my own work, conventional prototyping tools simply were not up to the job.

When the Tools Didn’t Match the Job

Flo Music (a project at ÄKTA) was a native app whose core differentiating functionality could not have been showcased with a prototype built using only the existing prototyping tools. Using peer-to-peer mesh networking, the app lets up to eight concurrent users add any song from anywhere on their phones (music streaming apps like Soundcloud and Spotify included) to a socially created playlist, otherwise known as a “Flo” (no WiFi or Internet connection required). All the users can see the playlist on their mobile device, add songs, and see who added what song(s) (see Figure 1).

Figure 1. Prototype of the Flo Music app that faked connectivity for connecting to a Flo network, joining a music Flo, and adding a song.

The app offers the option to purchase music you like and the ability to prioritize your songs with “Play Next” credits. When a user’s song comes up in the queue, it will stream to the host device that is connected to the speaker source or directly to the synched mobile devices. The Wi-Fi and Bluetooth interconnections required to time-sync a song had to be done in a native development environment.

The goal of prototyping isn’t to test the quality of how a digital product’s technical features would work; it’s to generate insights about how intended users experience the product when all its features and functions are working. Designers, developers, and product managers need to remember this fact when it comes to prototyping since it can reframe what kind of tools or experience simulations are needed.

The prototyping goals for this project were to identify what the ideal user experience would be like for on-boarding people into a Flo, and what potential problems would be created when people dropped off. Before developing a full-blown production version, some traditional prototyping tools were evaluated to test which aspects of the prototyping process would best represent the user experience. In terms of network music streaming, playing a white noise track that can be picked up by other nearby phones was not going to inform the development team in ways that were aligned to the experience design.

White noise is essentially monotonous without any changes in tempo or beat. Streaming a white noise track from one source in a staggered process will not actually reveal any delays or hiccups for every additional user that gets on-boarded to the Flo. With real music tracks, even a millisecond delay in the stream for every user that gets added to the shared playlist can be very distracting. Who’d want to skip a beat during a spontaneous outdoor party?

Moreover, such white noise generator tools don’t allow eight to ten devices to play music from one source. They would require that all the participants manually start the track on their own. Using these tools wouldn’t have represented the product vision for Flo, which was that the app would take control of playback when tracks are added so that songs don’t skip beats when users get on-boarded to a Flo. This functionality was precisely what set the product apart from competing apps, so we needed to employ a feature fake to prototype the product in terms of how people would interact with each other, their devices, and their environments when it comes to joining a network music stream.

“Body-storming” as a feature-faking prototyping method worked by helping represent our vision for how a user joins a peer-to-peer mesh network. Generally, body-storming is done by getting a small group of people together (eight people for Flo) and then defining the physical setting wherein you envision these users interacting with your product or experience. Experience designers then map out the user journeys from the perspective of which interactions “outside the app” influence users’ behaviors in these physical settings.

The goal for the body-storming activity for Flo was to determine users’ behaviors in settings where new users would be added to the music-streaming network or leave the Flo. We wanted to understand their expectations for how the app should accommodate changes in their dynamics and then incorporate those into actual product features.

Before detailing other ways that feature-faking tactics can work, I want to mention that there are some product concepts which, by necessity, need to go straight into production since neither prototyping tools nor feature-faking will work.

I previously worked on a location-based chat app whose core functionality and experience was to allow users to create and manage geo-fenced messaging groups. Essentially the app lets users draw boundaries of geo-fenced zones and then identifies the presence of friends, co-workers, family, and others, within those zones. The intended experience was to allow users to communicate to anyone within the geo-fenced zone at any point. A user who is already at home could still send messages to their “work” geo zone so that people who are still within the zone will receive the message.

This was a very difficult design challenge without having a working version of the app. It would be impossible to create a low-fidelity prototype which could reasonably imitate the requisite real-time communication and location detection. If anything, this case demonstrated that not testing product prototypes does not bode well for the product launch. After having a working “product” in native code, the reaction from users was that the app was confusing, and detecting who received a message and how they responded was disorienting.

When Faking Is At Its Best

When is it most appropriate to “feature fake?” There is no single universal answer that applies to all products, but I recommend it for cases where the team has control over all stages of product development. In those cases, the team can control when the prototype is ready to be tested or demonstrated, along with the planning, scripting, and additional hands needed to run it.

Modern technologies provide many capabilities and features. The next sections examine some notable strengths of different devices—geo-fencing, BLE, wearables—and the details to be aware of when faking them.

Onboard Sensors

The prototyping tools currently available to interaction designers are simply not capable of accessing onboard sensors. These include cameras, microphones, gyros, and others. The simplest way to feature fake a camera when it needs to be incorporated as part of a task flow is with an animated gif or video. Typically, the participant in a usability test is instructed to point the camera in a specific direction. The animated gif or video should be prepared with the likeness of the particular scene.
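
One way to wire that fake into an HTML5 click-through prototype is sketched below: when the flow reaches the camera step, a prepared looping clip is shown in place of a live viewfinder. The element ID and clip filename are placeholders for whatever the prototype actually uses.

```typescript
// Swap a pre-recorded clip in for the live camera view at the "camera" step.
// "viewfinder" and "camera-fake.mp4" are placeholder names for this sketch;
// the clip should be filmed to match the scene participants are asked to point at.
function showFakeCamera(containerId: string, clipUrl: string): void {
  const container = document.getElementById(containerId);
  if (!container) return;

  const video = document.createElement("video");
  video.src = clipUrl;
  video.autoplay = true;
  video.loop = true;
  video.muted = true;       // most browsers require muting for autoplay
  video.playsInline = true; // keep the clip inline on mobile rather than fullscreen
  video.style.width = "100%";

  container.replaceChildren(video);
}

showFakeCamera("viewfinder", "camera-fake.mp4");
```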

Bluetooth is a sophisticated technology available with many mobile devices. It provides the connection between mobile phones and wearables or even other mobile phones. When a product has a wearable component extended from a mobile app, it is not possible to demonstrate the interoperability with current prototyping tools because they are simply unable to create that connection.

This situation required a creative solution. A wearable device was paired to a device controlled by the research team. To the participant, the device appeared to be in standalone mode. The task was to discover a nearby business and, once it was located, walk there. The wearable was set to accept screens sent from the facilitator’s device. The participant was instructed to use a “talk aloud” protocol to convey their actions. A tap on the facilitator’s master device would send a URL of a screen, which would be rendered almost immediately on the participant’s wrist device. The wearable was fully capable of interacting with taps and gestures like any HTML5-based interactive prototype.
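
A setup like the one just described can be approximated with a small relay: the facilitator’s device sends a screen URL over a WebSocket, and a full-screen page on the wearable loads it into an iframe. The server address, message shape, and element ID below are assumptions for illustration; the study itself used whatever pairing the devices supported.

```typescript
// --- On the facilitator's "master" device --------------------------------
// ws://wizard.local:8080/prototype is a placeholder relay server for this sketch.
const facilitator = new WebSocket("ws://wizard.local:8080/prototype");

// A tap on the facilitator's device pushes the next screen to the wearable.
// (In a real session, wait for the socket's "open" event before tapping.)
function pushScreen(screenUrl: string): void {
  facilitator.send(JSON.stringify({ type: "show-screen", url: screenUrl }));
}

// --- On the wearable (an HTML page shown full screen on the watch) -------
const wearable = new WebSocket("ws://wizard.local:8080/prototype");

wearable.onmessage = (event: MessageEvent<string>) => {
  const message = JSON.parse(event.data) as { type: string; url: string };
  if (message.type === "show-screen") {
    // Render the pushed screen almost immediately, as in the study.
    const frame = document.getElementById("screen") as HTMLIFrameElement;
    frame.src = message.url;
  }
};
```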

A company that made security latches was looking to evolve their product into a digital, connected latch. Working with only sketches and written feature maps of the pre-development product, we conducted ethnographic research and, later, usability testing. The latch would require a responsive touchscreen. Early exploration led us to consider an Arduino-like device, which is essentially a microcontroller board (or a mini-computer) that can be used to prototype experiences that involve analog and digital interfaces.

The product needed to be portable and the Arduino with an LCD touchscreen and battery pack wasn’t small enough. Additionally, code would need to be written in order to get it to perform as expected. That made it a less practical option. Ultimately, the form factor simply wasn’t believable and made the latch unrecognizable in comparison to the established, traditional product on the market.

Instead, an old functioning Nexus One was enclosed in a 3D printed case and used for the presentation. It contained a touchscreen, onboard battery, and the sensors required to detect location and provide connectivity. The participants understood that the product was in the early stage of development, but none of them were able to recognize that an obsolete smartphone was the brains behind the latch.

Smartwatches

Smartwatches are here now and more are coming. Several major companies currently have a model on the market. New apps and updates of existing apps are being developed to be smartwatch compatible. A major use case for smartwatches is providing notifications. The challenge is to determine what information is worthy of notification. If the app becomes too “chatty,” it gets silenced—or even deleted—and possibly replaced by a competitor. If an app isn’t providing relevant notifications, then user reactions or experiences are missed opportunities. One solution would be to conduct a “walk and talk” research exercise. However, there isn’t a prototyping tool which can simulate a day in the life of a smartwatch wearer without employing native code.

Once again, the facilitator’s phone was used to send data to the participant wearing a smartwatch. This method has proven so successful it has become our standard practice. In this case, it was the notifications that were simulated. As we walked through a normal day, something was needed to elicit the participant’s attention. There are no apps, libraries, drop-ins, or prototyping tools that provide remote vibration functions. Instead, we used a vibrating bracelet of the kind sold for assistive or training purposes. With a second bracelet strapped on adjacent to the smartwatch, as shown in Figure 2, it was nearly imperceptible which wrist device emitted the vibrating or audible alert. The facilitator could control the interaction by sending screens to the watch display and then sending an alert for the participant to view.

Fake Here, Fake There, Fake Everywhere

There are many other ways features can be faked. Designers, developers, and other people involved in developing products should always be on the lookout for unexpected methods and measures to create representations of how mobile interactions can extend far beyond what users can do within their apps.

Recently, there have been a number of promising developments with prototyping tools which show a future with advanced capabilities. Tapping into Bluetooth-emitting beacons from HTML5 and JavaScript is one such advancement. It may require that prototypers get more involved in the prototype, but the alternative is working with native code development. It is my hope that soon prototypes can interact with an API simply through a WYSIWYG-like environment.
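
As one example of what those developments look like, the Web Bluetooth API already lets a browser-based prototype reach nearby Bluetooth hardware without native code. The sketch below shows the general shape of such a call, assuming the @types/web-bluetooth type definitions are installed; the button ID and the service named are placeholders, and broader advertisement scanning is still experimental and gated behind browser flags.

```typescript
// Minimal Web Bluetooth sketch: prompt the user to pick a nearby device
// and connect to it. Requires a secure context and a user gesture (e.g. a button tap).
async function connectToNearbyDevice(): Promise<void> {
  const device = await navigator.bluetooth.requestDevice({
    acceptAllDevices: true,
    optionalServices: ["battery_service"], // placeholder service for this sketch
  });

  const server = await device.gatt?.connect();
  console.log(`Connected to ${device.name ?? "unnamed device"}`, server?.connected);
}

// "scan" is a placeholder button ID in the prototype's HTML.
document.getElementById("scan")?.addEventListener("click", () => {
  connectToNearbyDevice().catch((err) => console.error("Bluetooth prototype error:", err));
});
```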

If there’s anything that can be learned from opportunities to feature fake, it is that the current tools are inadequate for creating pre-production prototypes that more closely resemble the possibilities of real-world mobile interactions. Until that changes, what designers and builders of products can achieve by prototyping mobile technologies will remain limited.

Avoiding Narcissus: Finding Inspiration in Unusual Places (Book Review)

A review of
There’s Not an App for That: Mobile User Experience Design for Life
By Simon Robinson, Gary Marsden, Matt Jones

Morgan Kaufmann

The cover of this fascinating and thought-provoking book provides a cautionary tale the authors want to help designers avoid. You may remember the myth of Narcissus, who, upon seeing his reflection in a pool of water, became so entranced by it that he drowned in his own image. The cover shows a modern-day Narcissus (based on the image by Caravaggio) peering into a wheel of screens of mobile devices with colorful and enchanting apps. He looks as if he’s about to fall into the digital experience. It is a wonderful image that seems to ask if we are all about to fall into our own screens, to be tugged away from the real world by them, never to emerge, like Narcissus. This is exactly what the authors of There’s Not an App for That: Mobile User Experience Design for Life try to explore.

The authors, Simon Robinson, Matt Jones, and the late Gary Marsden explore ways to build on human experiences—outside the screen—in order to enrich the user experience of mobile devices. This might not seem to be a novel concept, except that human experience is often not used as a source of inspiration. Instead of mimicking human tasks, either broad, like “communication,” or narrow, like “texting,” the authors draw us out from behind the looking glass and invite us to consider the wider world. Inspiration can come from so many places if we don’t keep our heads down, focused on our screens, but instead look up and see the physical world in all its richness, along with the social interplay of life. The authors give great examples of inspiration from food, fashion, fitness, and even from mess and uncertainty. They discuss how a design can enhance mindfulness and, perhaps most importantly, how to design for different levels of literacy and for sharing of information. This is important to remember because in many places mobile devices are shared.

How can we design applications that don’t drag us away from life, but instead blend physical and digital in ways that engage us in the world around us? This book takes you on an illustrated tour. Even more important than the specific examples the authors give are new mindsets, for example:

  • Moving from touch screens to feeling augmentation
  • Looking up from heads-down to Face On
  • Creating systems that derive inspiration from clutter rather than simply helpfulness
  • Designing for public and social use
  • Assisting people’s mindful interactions instead of the distracted, distant interactions we often have when using mobile devices
  • Creating systems that help everybody regardless of literacy or where they live

One of the things I liked most about this book is the way it poses questions to ponder. It suggests alternatives and asks us to challenge not only our own thinking, but the authors’ thinking as well; a novel approach to stimulating thought. It provides a rich collection of different ways of thinking about the design of mobile devices. It also includes numerous “Design Pointers.” These challenge us to think about how the topic can apply to real-world design issues and to think “outside the room” (as the authors put it), rather than providing the typical “tips and tricks” that are commonly found in books about design.

My favorite section (perhaps predictably) is how our designs can make a difference in the world. To accomplish this you have to actually be out in the world and partner with local populations to create apps, or rather systems, that can improve people’s lives and allow sharing that is accessible and low-cost for even poor and illiterate users. This is a tall order, but the authors have lots of experience doing just this. For example, before his tragic death, when Gary Marsden was a professor at the University of Cape Town, he and his students designed countless systems based on mobile devices that have had a positive impact on real people who are poor and in need. Co-authors Jones and Robinson have also had experience designing for the so-called “developing world.”

There is passing mention of how we need to re-examine our tools and methods when doing this type of work because these methods have been developed to create Western-oriented apps and, in many cases, they simply don’t work outside the so-called “developed world.” This is a complex and important area that warrants a great deal of attention, although it is outside the scope of this book. Still, I am grateful that the authors raise the issue because it is clearly something we need to address as a profession.

Ultimately, the authors are attempting to help us build apps and mobile-based systems that can also help us—as users—to remain engaged in the fullness of life. If you’re looking for quick cut-and-dried pointers, this is not the book for you. But if you want to think deeply about the way you design and how you might do it differently, you won’t be disappointed.

[bluebox]

Excerpt: Provoking new thinking

Throughout this book we’ll be trying to get you to think of alternative ways of presenting content and interacting with your users.

One technique that can help generate interesting deviations from the norm, whatever the platforms you are building on, involves imagining a world where certain characteristics commonly taken for granted are removed.

So, what about the world where you can’t see anything any more, or your sight is partial? What would your mobile device feel like then? A research team in the UK came up with the Haptic Lotus, shown below, through just such an experiment.

A white plastic device with a center “body” and folding petals sits in the palm of a hand.

The Haptic Lotus is designed to be held in both hands, its petals opening and closing as it gets nearer or further away from a target location. The team deployed the device as part of an “immersive haptic theatre experience,” where audience members explored a pitch-black room carrying the device. While there were many fascinating insights from the work, let’s pick out just two of the comments from people who took part:

“The device was like a purring cat, or a pet.”

“It was interesting to have something ‘alive’ in your hands. It was companionable.”

Using Bret Victor’s inspirational piece as a starting point, we’ll be looking at how to break the dullness of glass screen prods to develop designs that are more “‘alive’ in your hands.”

Problem 1: From Touch to Feeling

What’s The Problem?

Digital interactions through mobiles are an increasingly prominent part of day-to-day lived experience. But what are they doing to the richness of this everyday life?

Our starting point, in this first Problem, is to pause for a moment and think about the extent to which the smooth glass of our phones, which separates us from the digital world inside, numbs or dulls, rather than enlivens. As you’ll see as you read on, this book is a celebration of what it is to be truly alive—to revel in the complexity, ambiguity, messiness, and stimulation the world provides.

Why Should You Tackle It?

If we look away from our interactions with gadgets, we see inspirations for what mobile experiences might be both now and in the future. We see a world of multisensory beings that taste, smell, see, and feel the world. Sometimes we are hit with a double espresso jolt of life—think of the pain of falling off a bike; other times we feel it much more subtly—as a gentle breeze brushes the hairs on the back of your neck. We live in a world where emotion is as important as efficiency.

We also experience a world that we can shape and manipulate through an equally broad spectrum of actions: from demolishing a wall with a sledgehammer to creating beautiful origami with deft finger folds.

Our challenge to you here, then, is to consider how these human skills can be put to better use, and inform the interaction designs we make both today and on the devices to come.

Key Points

  • “Touch,” as in “touch screen,” is a limited design resource compared to what humans are capable of in terms of the ways we can sense, respond, and manipulate.
  • We have been built for physical materials; digital materials currently lack many qualities to enable us to fully engage with them.
  • When we think about the physical world, we are reminded that not every interaction is pleasant, calming, and joyful. Facing up to a spectrum of emotional responses can introduce new thinking to interaction design.
  • Research labs and visionary designers have been exploring how to break through the glass to create digital experiences that engage better with these multisensory, emotional, and multi-manipulator abilities.

[/bluebox]

How to Make Motion Design Accessible: UX Choreography Part Two

This article is a continuation of “Motion Design: An Intro to UX Choreography.”

Since I published my last UXPA Magazine article on motion design about a year ago, I’ve had many conversations about how to make motion design accessible. Accessibility is not only a hot topic in my office, but also a design requirement that we discuss nearly every day. That’s because I work for Alaska Airlines, and since 2016, all US-based airlines are required to make their websites accessible.

In the first article, I made the case that motion design is not simply decoration, but carries huge user experience implications because it shapes spatial understanding, hierarchy, communication, and emotional impact. Motion design is used in macro-interaction models, transitions, micro-interactions, branding elements, and more.

While motion can greatly strengthen design when done properly, it can have a huge impact on accessibility. Accessibility is a unique and rewarding challenge that ultimately improves the overall design of a system. It’s an important facet of inclusive design, and ensures that all people, regardless of ability level or technology, can use your system with as little friction as possible. Although it can seem like a hindrance at times, designing motion for accessibility will ultimately push you to be more thoughtful and thorough.

Accessibility Factors to Consider

People experience the internet in many ways. Most people see the visual design on a screen and use peripherals or touchscreens to take action on a system. But for people who are blind or have vision challenges, who need assistive technology to interact with a system, who have vestibular disorders, or who have spatial comprehension challenges, interacting with a system that most people find easy can be difficult or impossible. Motion design adds a layer of complexity, so it’s important to understand how motion design can negatively impact some users in order to design with those challenges in mind.

The most common accessibility challenges in motion design are listed below, but keep in mind that there may be other considerations depending on your specific use cases and industry.

  • Screen reader performance. Blind or low vision users often use assistive technology, such as screen readers, to get information online. How well a screen reader works depends on how well the code on a page is marked up. Motion and other micro-interactions can trip up screen readers, causing information to be read out of order or not at all, or introducing delays in system processing time. Specifically, screen readers can miss events like partial page loads, temporary animated overlays (like toasts or banners), and animated state changes; a brief code sketch of one mitigation follows Figure 1 below. When that happens, users can miss critical information—a situation that can be harmful, or at least extremely frustrating.
  • Vestibular disorders. Vestibular disorders affect the parts of the inner ear and brain that process the sensory information involved in controlling balance and eye movements (source: vestibular.org). People with vestibular disorders may find motion design in an interface unpleasant, nauseating, or disorienting to varying degrees.
  • Object association. Motion design that is too drastic, too fast, or both can cause confusion and add to a user’s cognitive load, because they have to work out what just happened.
  • Screen zooming. Users who have low vision but are still sighted may use screen magnification to read text or look at images more clearly. Depending on how much the screen is zoomed and how responsive the site is, more or less of the page will be out of view. If an animated event such as a notification or toast appears off screen, the user may not see it. Additionally, if an animation occurs in a zoomed portion of the screen, it may take up most of the viewing area and be visually overwhelming (see Figure 1).
Animation showing zooming in on a screen
Figure 1. When a user magnifies their screen, most of the viewport can go out of view.
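
One common mitigation, for teams building on the web, is to mirror important animated content into an ARIA live region so that screen readers announce it even if the visible toast animates in and out. The TypeScript sketch below is a minimal illustration of that idea, not something taken from this article; the element and function names are made up for the example.

  // Minimal sketch: announce animated toasts to screen readers via an ARIA live region.
  // The names createLiveRegion and showToast are illustrative, not from the article.
  function createLiveRegion(): HTMLElement {
    const region = document.createElement("div");
    region.setAttribute("aria-live", "polite");   // announce without interrupting the user
    region.setAttribute("role", "status");
    region.style.position = "absolute";
    region.style.width = "1px";
    region.style.height = "1px";
    region.style.overflow = "hidden";             // visually hidden, still read aloud
    document.body.appendChild(region);
    return region;
  }

  const liveRegion = createLiveRegion();

  function showToast(message: string): void {
    // The visual toast animation happens elsewhere; here we only make sure
    // the same content is exposed to assistive technology.
    liveRegion.textContent = message;
  }

  showToast("Your changes have been saved.");

The point of the sketch is simply that the announcement does not depend on the animation: even if the toast slides in and out too quickly, the live region content is still read.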

The most important thing to remember is that everybody experiences the internet differently. Just because something may be routine or easy for you doesn’t mean it is routine or easy for everybody. By understanding some of the big challenges people face with everyday web design, you can start to be more sympathetic and thoughtful in your design choices.

UX Choreography Key Functions

When planning motion design for user experience, I like to think of it as “UX choreography” so that I remember its role in UX design. UX choreography is different from animated videos or other artistic expression. Motion design in an interface is meant to reinforce hierarchy, strengthen core communication, and reduce cognitive load for the user. The following three key functions should be well understood and considered for every motion design project.

  • Spatial motion design helps orient a user. Spatial motion design applies our basic understanding of physical space and translates it into a digital system to help the user remember where they are, how to move forward, and how to get back to where they came from. For example, if page transitions always swipe from right to left as the user advances, their mental model will likely anticipate that going back will make the screen swipe in the opposite direction, left to right. When done well, users don’t have to think about spatial motion design. But if the motion is counter-intuitive, it can be jolting.
  • Functional motion design reinforces hierarchy, directs attention, hints at micro-interaction possibilities, provides system feedback, and more. It consists mostly of micro-interactions, but also includes the motion-based relationships between sections, components, and drawers—like when a user clicks a filter control and the rest of the page rearranges as a result. For example, if a user clicks a submit button and a loading animation signals that the system is processing the request, a loading bar or spinner helps the user understand what is going on and roughly how long it might take.
A visual bar fills with color to show progress
Figure 2. Example of an animated loading bar, a common way to convey system processing status.
  • Delightful motion design is the little bit of polish, branding, and personality that is layered into the interface to give the user a sense of feeling or excitement about using a system. Delightful motion design should be used very sparingly and strategically, but when it is done well it can make the experience memorable and special. Some brands like to hide “Easter eggs” in their delightful motion design opportunities, or may feature their mascot, logo, or other brand features. Delightful motion design is best used when the user isn’t being asked to process a lot of information, so loading screens, 404 pages, or other moments of cognitive pause are great opportunities for something special and delightful.
A finger points to a switch. The arm drips with sweat.
Figure 3. MailChimp uses a delightful piece of animation featuring their chimp mascot, but only when the system is processing or you reach a 404 page.

Accessible Motion Design Recommendations

In addition to the three key UX choreography functions, there are many ways you can adjust motion design to make it accessible. These recommendations are a technology-agnostic guide of best practices that should make up the foundation of your motion accessibility strategy.

  • Parallax is awful. Parallax is an optical illusion that creates a sense of depth. It is rarely done well and even more rarely accessible. Avoid it unless absolutely necessary, and use it sparingly if you use it at all. Parallax can cause nausea or disorientation. In fact, when Apple’s iOS 7 first came out featuring parallax backgrounds, there was an outpouring of requests to be able to limit motion in the interface (a setting Apple quickly released).
  • Keep motion design to less than 1/3 of a viewport. Motion design that is large or fills the screen can disorient and nauseate people. Keep any animations small and simple, and make sure that they scale with any responsive adjustments for different size screens.
  • Let users trigger motion. Motion design is best when it is subtle and happens after a user has taken an action. Motion design that auto-plays or is unexpected can be unpleasant for many people.
  • Keep animation at the point of focus. Animation should be used to direct a user’s attention to the most important thing on the screen that they may need to interact with. If the primary function on a screen is static and a lot of unrelated motion is happening elsewhere, users can be distracted and confused.
  • Offer settings to reduce motion. If you’re designing for a native app or are supporting certain browsers that allow for reduced motion settings (Safari is releasing better reduce-motion tools within the next year, and other browsers are likely to follow), consider designing an easy way for your users to reduce or turn off your motion design; a minimal sketch follows this list. Just be sure that if users disable motion, they can still understand and use the interface without it.
  • Use motion design to make the interaction easier to understand. Motion design can either increase or decrease cognitive load, depending on how well you design it. Motion design that distracts or overwhelms a user will add to it, whereas subtle motion design that is used to communicate system processing, feedback, state changes, or next steps can have a positive effect. It’s a great idea to usability test your interface with users with motion-related disabilities to get well-rounded feedback.
  • Test everything with a screen reader. Sometimes it’s hard to predict how a screen reader will handle micro-interactions (like in-field form validation) and partial page loads. Testing a system with a screen reader (and with screen reader users) can help you catch issues that may arise. Since screen reader performance is only as good as the page’s markup, work closely with your developers on this step; it will make testing much faster and more accurate.
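
Where the platform exposes a preference, respecting it can be as simple as branching on the standard prefers-reduced-motion media query. The TypeScript sketch below is a minimal illustration for browsers that support the query; the panel animation itself is a placeholder, not a recommendation for any particular effect.

  // Minimal sketch: honor an OS or browser "reduce motion" preference where it exists.
  // The media query is the standard prefers-reduced-motion feature; revealPanel is illustrative.
  const reduceMotion =
    window.matchMedia("(prefers-reduced-motion: reduce)").matches;

  function revealPanel(panel: HTMLElement): void {
    if (reduceMotion) {
      // Skip the transition entirely; the end state alone must still make sense.
      panel.style.opacity = "1";
      return;
    }
    panel.animate([{ opacity: 0 }, { opacity: 1 }], { duration: 200, easing: "ease-out" });
    panel.style.opacity = "1";
  }

Either way the panel ends up visible; the preference only changes whether the user has to sit through the motion.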

Motion design is a great example of the idea that designing for accessibility will make the designs better for everyone. At first, meeting accessibility standards may seem like an unwanted constraint to your design ideas, but incorporating good accessibility practices into your motion design workflow can help you add polish and maturity to the overall system in a way that is useful for all users.

TV or Not TV: Designing a Motion-Based Pointing Remote Control

A mother and child lying in bed watching TV and using a remote control.
Watching TV

The television has evolved substantially since it was introduced more than half a century ago: black and white to color, analog to digital, CRT to flat screen, and 2D to 3D. Content sources have evolved, too, from a dozen broadcast channels, to thousands of digital channels plus DVR recordings, video-on-demand libraries, and even the Internet. But through these many transformations, the remote control has changed very little. New buttons have been added to support new features, and designs have become sleeker with better ergonomics. The control paradigm, however, remains the same: dozens of buttons to control the TV functions, and up, down, left, and right arrows to navigate the screen (see Figure 1). The TV remote control is needlessly complex, and navigating through content choices one item at a time is inefficient and cumbersome. The TV user interface (UI) as a whole does not meet the needs of today’s TV users.

A pile of remote controls with many buttons.
Figure 1. Standard TV remote controls

Hillcrest Labs has embarked on a mission to solve this problem. Frustrated by the inadequacy of the TV user interface, we found ourselves asking, “Why can’t I just point at the screen to get what I want?” After all, pointing is an innate human behavior to select things. That question led us to design a pointing UI for the TV, which was a collaborative effort that involved many disciplines, including human factors, industrial design, UX design, software, hardware, and sensor engineering.

Understanding the TV User Experience

The first step in the process was to experiment with pointing at TV using off-the-shelf equipment. A PC was attached to a TV and used to run early prototypes of applications, such as a mini-guide and video-on-demand, controlling the experience with a mouse. There was a great deal to leverage from the personal computer and its mature pointing interface, but we didn’t want to replicate the PC user interface on the TV. We wanted to design specifically for the TV experience.

TV Usage is About Entertainment and, Hence, “Lean-Back”

Viewers expect the TV to deliver entertainment content, mostly in the form of video. Although the TV can provide information, enable communication, and also offer productivity functions, these experiences should fit within the context of a video-oriented entertainment experience.

Watching TV is generally associated with relaxation and comfort. The viewer is typically seated on a sofa or lying on a bed with the expectation that the content will be delivered to them. This mode of usage is largely passive; the viewer is active only long enough to select a program and passive while viewing the content. The user expects to exert minimal effort to control the experience.

In contrast, computer usage is a “lean-forward” activity. The user actively seeks information and tends to multi-task. That is not to say that TVs are never used in a lean-forward fashion and computers are never used in a lean-back way. Video consumption on the computer and gaming on the TV break these patterns, but each has a dominant use case that drives design decisions.

TV is a Communal Device

TV is typically placed in a room where the family congregates, and viewers are accustomed to watching TV in a group setting. Selection of content and control of the user experience are subject to group dynamics. There is one remote control and, much to the chagrin of certain family members, it must be shared. In contrast, computers are personal. Even when shared in a family, only one family member at a time uses the computer.

TV is Viewed and Controlled from a Distance

TVs are often placed across the room from the viewing area. Therefore, watching TV is typically done from a distance. Because TVs are also placed in kitchens, bedrooms, and workout rooms, controlling the TV should be possible from a variety of different positions, such as when sitting, standing, or lying down, and at different angles and distances. But in all of these settings, the viewer is looking up at the TV screen. Therefore, controlling the experience should not require looking down or away from the TV.

User Interface for the TV

Understanding the context of the TV experience meant designing a pointing UI specifically for the TV. Inspired to transform the TV experience, but also cognizant of the need to please the proverbial “couch-potato,” we got to work.

The design of the pointing UI involved both the input device and the Graphical User Interface (GUI). Designing the two components together allowed us to define an interaction system holistically from the ground up.

A pointing UI allows the designer to trade hard buttons for on-screen controls. Thus, a pointing remote control can be designed with very few buttons and still enable the full range of functions TV viewers expect. Our research had shown that consumers were frustrated by the typical 50+ button remote control, so we designed an interaction system using only five buttons (see Figure 2). We decided to use the power of pointing to simplify the experience and deliver just what the user needs, no more, no less. Quoting John Maeda from The Laws of Simplicity, “Simplicity is about subtracting the obvious and adding the meaningful.”

Hand-drawn design sketches of remote controls
Figure 2. Sketches of several pointing remote controls

Pointing Remote Control

A pointing remote control must allow the user to remotely move the position of a cursor on a TV screen. The pointing solution should require little physical effort, and should be usable with one hand and without a supporting surface. The remote control should be designed so that it can be used comfortably in a variety of positions, and it should work within a reasonable range of angles and distances from the TV.

A number of pointing technologies were evaluated, including joysticks, trackballs, touch-pads, and in-air motion-based mice. We found motion control provided the most natural interaction: the user gently waves the hand in the air to position the cursor, much like actual pointing. Early prototypes of motion pointing devices were tested extensively with users. Some of our research included contextual inquiries in which we tested our prototypes and UI concepts in users’ homes, within their TV-viewing environments. We also ran many tests in our “living room” lab.

Our user research validated that motion pointing is, indeed, an intuitive control method for the TV, and feedback from users allowed us to refine our solution. We found several factors to be critical when applying motion control technology to the TV remote control:

  • Immediate feedback: To make the interaction system intuitive, we had to ensure it was readily apparent how to use it from the moment the user picked up the remote control. We designed the pointing interaction so that the cursor appears in response to the user picking up the device, eliminating the need to press a button. Turning the cursor on with an explicit button press increases the effort to learn the interface for novice users and increases the cognitive load for repeat users.
  • Pointing without pointing: We found that having to point directly at the screen could be tiring and not always comfortable in typical TV-watching situations. We wanted the user to be able to relax his or her arm and not be forced to point at the screen. So, we designed a relative pointing system using inertial sensors. The user can point a relative pointing device in virtually any position without the need to point explicitly at the TV. This approach is in contrast to absolute pointing using optical sensors, where the user has to explicitly point at a camera mounted near the TV.
  • Tremor reduction: Tremor is the involuntary oscillation of the hand; it is natural in humans and varies from person to person. A motion device will respond to these small oscillations and move the cursor on screen. We found that canceling the effects of natural tremor and holding the cursor steady is critical to the usability of a motion remote control (a simple filtering sketch follows this list).
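
To make the last two points concrete, here is a minimal TypeScript sketch of relative pointing combined with simple tremor suppression (exponential smoothing plus a dead zone). The sensor interface, gain, and thresholds are illustrative assumptions for the sketch; the filtering used in a shipping remote control is more sophisticated and is not described in this article.

  // Minimal sketch of relative pointing with simple tremor suppression.
  // All constants are illustrative; a production pointer would use calibrated
  // inertial data and more sophisticated filtering.
  interface AngularRate { yaw: number; pitch: number; }  // radians/second from a gyroscope

  const GAIN = 800;        // pixels per radian, illustrative
  const DEAD_ZONE = 0.005; // rad/s below which motion is treated as tremor
  const SMOOTHING = 0.2;   // 0..1, weight given to the newest sample

  let filteredYaw = 0;
  let filteredPitch = 0;
  let cursorX = 640;
  let cursorY = 360;

  function updateCursor(rate: AngularRate, dtSeconds: number): { x: number; y: number } {
    // Exponential smoothing damps the small oscillations of natural hand tremor.
    filteredYaw = SMOOTHING * rate.yaw + (1 - SMOOTHING) * filteredYaw;
    filteredPitch = SMOOTHING * rate.pitch + (1 - SMOOTHING) * filteredPitch;

    // A dead zone keeps the cursor still when the hand is effectively still.
    const yaw = Math.abs(filteredYaw) < DEAD_ZONE ? 0 : filteredYaw;
    const pitch = Math.abs(filteredPitch) < DEAD_ZONE ? 0 : filteredPitch;

    // Relative pointing: the cursor follows wrist rotation, so the device
    // never needs to be aimed directly at the screen.
    cursorX += yaw * GAIN * dtSeconds;
    cursorY += pitch * GAIN * dtSeconds;
    return { x: cursorX, y: cursorY };
  }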

There were many other considerations to the design of the remote control, including power consumption, ergonomics, and button design.

Graphical User Interface and UX Design

The design of the TV GUI and applications should fit the lean-back, entertainment-oriented context of the TV. The primary use case is watching video, so controls should be arranged at the edges of the screen with minimal coverage of the video. Ideally, controls should appear only when needed. We decided to use motion to activate the UI and make icons visible. Explicit motion indicates that the user is about to engage with the system; likewise, lack of motion indicates that the user is consuming content and that controls should get out of the way.

In order to use a pointing interface for the TV, the visual elements of the GUI must be designed to be both viewable and selectable from a distance. The Society of Motion Picture and Television Engineers (SMPTE) provides guidelines for TV screen size and distance to the screen. We used these guidelines to arrange our living room lab setup and fine-tune icon and font sizes for optimal visibility for different users and different TV-size-versus-distance configurations.
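
The geometry behind “viewable from a distance” is simple enough to sketch. The TypeScript below computes how many pixels a target needs in order to subtend a chosen visual angle at a given viewing distance; the numbers in the example are illustrative and are not SMPTE values.

  // Minimal sketch: minimum on-screen target size (in pixels) for a chosen
  // visual angle at a given viewing distance. All example values are illustrative.
  function minTargetPixels(viewingDistanceM: number, visualAngleDeg: number,
                           screenWidthM: number, screenWidthPx: number): number {
    const angleRad = (visualAngleDeg * Math.PI) / 180;
    const physicalSizeM = 2 * viewingDistanceM * Math.tan(angleRad / 2); // size subtending the angle
    const pxPerMeter = screenWidthPx / screenWidthM;
    return physicalSizeM * pxPerMeter;
  }

  // Example: 3 m viewing distance, 1-degree target, on a 1.2 m-wide, 1920-px-wide screen.
  console.log(Math.round(minTargetPixels(3, 1, 1.2, 1920))); // roughly 84 px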

We developed tests based on Fitts’ Law to size icons for the best point-and-click performance. A Fitts’ Law test presents targets of varying sizes appearing at random locations on the screen and measures the time it takes for the user to move from one target to the next and acquire that target. The larger the icon, the easier it is for the user to acquire, so the key is to find the minimum size that achieves the desired performance.
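
For readers who want the arithmetic behind such a test, here is a minimal sketch of the Fitts’ Law model in its common Shannon formulation. The constants a and b are illustrative placeholders; in practice they are fitted from point-and-click data like that gathered in the tests described above.

  // Minimal sketch of Fitts' Law (Shannon formulation): MT = a + b * log2(D/W + 1).
  // The constants a and b below are illustrative, not measured values.
  function indexOfDifficulty(distancePx: number, widthPx: number): number {
    return Math.log2(distancePx / widthPx + 1); // in bits
  }

  function predictedMovementTimeMs(distancePx: number, widthPx: number,
                                   a = 300, b = 150): number {
    return a + b * indexOfDifficulty(distancePx, widthPx);
  }

  // Doubling the icon width lowers the index of difficulty and the predicted time:
  console.log(predictedMovementTimeMs(600, 40).toFixed(0)); // smaller target, slower
  console.log(predictedMovementTimeMs(600, 80).toFixed(0)); // larger target, faster

The model makes the trade-off explicit: the larger the target, the faster it can be acquired, so the design task is to find the smallest size that still meets the desired performance.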

Since motor skills and manual dexterity vary by age, we tested users in three different age groups: children 5-11 years, whose motor skills are still developing; adults 65-79 years, whose manual dexterity may be degraded by aging; and adults 18-60 years, who are not subject to age-related limitations. Using data from these tests, we were able to characterize pointing performance for different target sizes and age groups, and derive best-practice guidelines appropriate to the pointing technology.

Recommendations and Guidelines

We encountered many interesting and challenging problems as we designed a new user interface for the TV. There is a lot more to designing for the television and to motion-based input devices than included in this article. But the following are a few recommendations:

  • Keep in mind that video consumption is the primary use case.
  • Avoid clutter on the screen and minimize coverage of video.
  • Minimize the cognitive load and be mindful of the passive, lean-back experience.
  • Deliver interactivity as an opt-in experience, not an opt-out.
  • Because users are used to movement on screen, avoid static screens and use scaled video windows, motion backgrounds, and animation to create movement.
  • Use fonts that render well on the TV and appropriate font size for legibility.
  • Use icons that are easy to recognize for typical TV-viewer distances.
  • Use Fitts’ Law to find the minimum acceptable icon size to make the point-and-click interaction sufficiently easy.
  • Remember that TV is a “heads up” experience; allocate only high frequency functions to the remote control so that the user can keep attention on the screen.
  • If considering motion pointing, design for the lean-back TV experience and not a lean-forward gaming experience.
  • Consider relative pointing and account for tremor.

Forms on the Go: Usable Forms for the Mobile Web

As recently as 2006, it was advisable to avoid forms altogether when designing a mobile website. Multi-step, link-driven selection list wizards (Figure 1) were considered better practice, as implementing and using forms on the mobile web was considered overly difficult. This led to long pathways to information and an ineffective user experience for people in a mobile context.

pictures of cellphones at different steps of a task
Figure 1. Many steps are required to make a choice.

There have been gradual improvements in the usability of forms on the mobile web. The arrival of the Blackberry brought some improvement in text entry, which for users has always been, and remains, a challenge. The iPhone has improved the presentation of forms, along with other standard mobile web elements, presenting users with familiar UI controls and established interaction methods. Such phones are driving mobile web use out of proportion to their handset market share. Nonetheless, there is still a great diversity of handsets accessing the Internet, and their differences are noted by some as a hindrance to the development of the mobile web.

This article describes some of the specific lessons learned designing with forms as primary interfaces for the Yellow™ and Whereis® mobile sites at Sensis in Australia between 2006 and 2008. It also outlines the evaluation methods we tried and had success with. (Yellow allows people to search for businesses around a location and Whereis provides maps, directions, and map-based search.)

Keep It Simpler

Unless you’re designing for a specific device like the iPhone, you’ll most likely be designing for different browsers and screen sizes, as well as different interaction modes such as touch screen, stylus-driven, scroll wheel-driven, and basic click and select. Each of these means a different method of making a text field active, expanding a drop-down list, or highlighting a radio button for selection.

Text entry is generally the most difficult interaction a person will encounter on a mobile form.

Form-based input requires considerable attention and dexterity. One of the challenges of designing the homepage for Yellow was balancing the amount of information required on the homepage against the number of steps required to reach a results page. We decided to include location entry on the homepage so that results could be displayed on the next page, which created a complex entry point to the site.

Design and Layout

The majority of mobile sites use a vertically stacked layout; it supports the way most people read the mobile web page and was the easiest to implement across multiple devices. Form design follows suit; labels, form fields, and actions are best arranged stacked vertically.

Problems with non-stacked layouts include the impact of limited screen width on text entry fields or drop-down lists, and the difficulty of consistent formatting across a range of devices. Where we did implement adjacent elements, such as on the Yellow homepage (Figure 2), lab participants showed a tendency to overlook the right-floated Name option button when asked to recall the elements of the page. We assumed that this layout broke a model of scanning the left side of the page for cues about the purpose of the page, then homing in on the key elements.

Yellow homepage screencap
Figure 2. Users overlooked the Name radio button.

OK/Cancel: Action Stations

Two common usability issues we observed were associated with form submit buttons: difficulty identifying the highlight state, and difficulty with selecting form buttons adjacent to text fields.

Many phones will accommodate the use of graphical form buttons via CSS. Implementing custom graphical buttons is worthwhile, as most standard browser controls are not visually prominent or large enough to be easily discernible and selectable on a range of mobiles. Figure 3 shows a custom CSS Search button on the Sensis Mobile site; the button is active and the highlight state is visible thanks to color contrast, but adding padding around the button would increase the prominence of the highlight state.

screencap of screen with search button
Figure 3. The Search button is prominent but would benefit from more space.

A poor example: Telstra’s location preferences page (Figure 4) includes a Submit action rendered as a link. An opportunity to visually emphasize the key action on the page is missed, and Submit blends in with the navigation links.

web form
Figure 4. Submit as a link lacks emphasis.

As Luke Wroblewski noted in Web Form Design, visually de-emphasizing or removing the Cancel form button can help emphasize the submit action, and on the mobile web there are several other reasons why secondary actions are not useful and should be avoided:

  • We observed that many users used the browser Back button to navigate away from pages, rather than using on-page buttons. The browser Back button is one of the most accessible controls on a mobile browser and usually has a dedicated hardware key; on-screen controls cannot compete.
  • On some phones that navigate by moving the selection up and down the page, the left-aligned element is not always selected first; the position of the previous element can cause the right-aligned element to be selected instead. This increases the chance of people selecting Cancel in error—disastrous if they lose entered text or a series of selections. Figure 5 shows a Cancel button automatically highlighted; if selected, it would undo all the text entered on this page.
  • Cancel gives no indication of the destination page, and people are reluctant to click on something that may take them further from their task. Figure 6 shows an example where the link Return to map replaces the Cancel button.
screencap of web form
Figure 5. Cancel can undo a lot of hard work.
screencap of web form
Figure 6. Return to map is superior to Cancel.

Flexible Inputs

In a recent review of Amazon’s mobile web and mobile application offerings, Joel Pemberton of Punchcut notes that one of the distinct advantages of the mobile application is that it remembers your login, something the mobile browser Safari does not do. Many phones still do not support cookies, or provide browser-based password lists for login recall.

There are also times when a mobile website can pre-populate form fields based on previous entry. This can be a disadvantage for people if they have to delete unwanted text with individual keystrokes, or by drag-selecting all with a stylus; both tasks are intensive interactions.

Yellow mobile used pre-population on the last entered search term—the rationale was that building search queries (appending another keyword to narrow the search) or correcting search queries (spelling, plurals/singulars) would be common tasks.

Whereis mobile also pre-populated the address entry page, the rationale being that the state and suburb information could be reusable, and that retaining the street number and name information would provide continuity and avoid confusion for people affected by different presentations.

Detailed statistics would have validated these assumptions, but these specific statistics were not available to us at the time.

Select Lists, Radio Buttons, and Checklists

As with web browsers, form elements will be rendered differently across mobile operating systems and browsers. This means people may not recognize form elements they have previously used on the large-screen web, nor quickly understand their function.

Figure 7 shows how Nokia Series 60 browsers render an active drop-down list—in this case allowing multiple selections. Knowing how the phones you’re designing for handle these form elements is very useful in anticipating the problems people may encounter. In this case, the UI is styled according to the operating system, not the website, and hence can be visually incongruous. Additionally, the search field below the selections provides search capabilities within the drop-down list, and we observed numerous test participants mistake this field for a broader site search. Being aware of problems like these can help determine whether a specific form element is likely to be effective.

screencap of web form
Figure 7. Different visual appearance and controls imposed by the browser can be confusing.

Usability Testing Methods

Indoor/Outdoor

We predominantly tested our applications with participants sitting down in the lab. We captured each participant’s actions with ceiling-mounted cameras, one of which took a tight shot of the handset. This generally would not record the screen contents very clearly, so we tried an iSight camera on a gooseneck stand positioned above the handset, allowing the participant to move the phone with some freedom. To avoid participants’ discomfort from holding their hand in one place during most of the session, we framed the shot a bit wider and put markers on the table to show participants where they were in the picture. This area was about the size of an A3 sheet of paper and allowed some sense of natural movement.

Numerous mounts for phone and camera are readily available in stores and can be set up easily. We recommend that the setup capture both the actions on the keypad and on-screen, with a second camera capturing larger gestures, such as when participants need to bring the phone closer to their face.

In early 2007, a Sydney company, eyetracker, performed a proof of concept with us on their head-mounted eye-tracking system. We used internal participants and sent them outdoors to find some local businesses. The results were interesting: we quickly got a sense of how little attention the mobile phone could be afforded when actively negotiating a busy shopping mall—the participants’ eyes were barely on the screen. However, the video resolution of the captured mobile screen wasn’t good enough for detailed analysis of usability issues. This technology will evolve and eventually provide a means of assessing mobility and attention issues.

Barbara Ballard has noted that field testing is most useful late in the design cycle when the most obvious usability issues have been discovered in the lab. One of the key benefits we found in our limited field testing is in seeing how an application will perform under circumstances that you have not imagined. The weather, the light, the number of people around, and the presence of friends all affect how much attention a person is able to give your application. If the mobile website relies on forms, the attention required of people to complete tasks in context is vital information.

Testing with different handsets

Our approach in usability testing mobile websites was to always try to minimize the impact of the phone’s form factor, operating system, and browser on the participant’s evaluation. While using the participant’s own handset might seem the ideal way to ensure that the participant is not learning a new keypad layout or text entry method, two issues stopped us from doing this:

  • Not all participants had mobile internet connectivity enabled on their phone (prepaid accounts in Australia typically do not include mobile internet access). Some did, but people were unsure or anxious about the cost of the service on their mobile phone plan.
  • The time required to build prototypes that rendered correctly across a range of phones was prohibitive. We used HTML prototype pages that we built ourselves, even at early-stage testing; doing this manually to cover numerous handsets has a dramatic impact on the cost of usability testing.

Our approach was instead to use two different handsets in a session, based on site statistics for handset usage. This approach is not ideal. As mobile internet access becomes more mainstream, it may be increasingly possible for participants to use their own handsets, eliminating the impact of learning a new phone’s peculiarities during usability testing.

Getting the Flow Right

In Yellow or Whereis mobile, launching a search, or searching for a location, is not necessarily the end of the interaction with the form. People launched an average of 1.8 searches per session in Yellow mobile, and getting directions in Whereis mobile entailed revisiting the form at least twice. Helping users recognize, diagnose, and recover from errors is important, as is trying to design for “expert users” (for example, by removing redundant tip text from the interface).

Evaluations indicated the ideal interaction flow could be: launch quickly, then assess, then modify if necessary. A good model for this is the Google search application that can be installed on Nokia devices (Figure 9). (The application can be found on the Google mobile homepage at http://m.google.com/.) The Google mobile web search interface can be launched with a single button press, which can be much quicker than launching the phone’s browser and entering a URL or selecting a bookmark. The search application then launches the browser, establishes an online connection, and returns a results page. This can take some time, but no further action is required, making it easier to launch a search.

picture of cellphone and a usability test
Figure 8. Attempting to record the screen and keyboard while allowing some natural movement.
screencap of web search form
Figure 9. Launch quickly, assess, modify.

Getting Better All the Time

Designing forms for the mobile web presents some unique challenges, including the attention people must give to forms when using them in context, how phone browsers render forms, the difficulty of text entry on the phone, and how well the website enables recovery from errors.

The iPhone, Blackberry, and other devices are addressing some of the issues that are beyond the mobile website designer’s control. Lab and field-based testing present their own challenges, particularly capturing the environmental, on-screen, and physical context of the people using the site. Communicating to stakeholders the difficulty of capturing these elements can help, particularly as mobile recording setups mature. And finally, site statistics that capture detailed form inputs and pathways are critical to the ongoing improvement of a mobile website that uses forms.

Designing for Peace of Mind: Almost Getting to Flow

What do Twitter and the game Angry Birds have in common? Their users are intrinsically motivated to use them. This motivation is important for both apps and sites from a revenue generation perspective: the frequency and duration of use of Twitter supports its advertising revenue, and the engagement of Angry Birds players leads to advertisement revenue and higher conversion of in-app purchases. Even content-oriented apps and sites are concerned with customer engagement in order to generate an experience that draws return users.

The use of Twitter and Angry Birds promotes what Mihaly Csikszentmihalyi calls “optimal experience,” or “flow”. He describes flow as “being completely involved in an activity for its own sake. The ego falls away. Time flies. Every action, movement, and thought follows inevitably from the previous one, like playing jazz. Your whole being is involved, and you’re using your skills to the utmost.”

Activities as varied as playing music, rock climbing, and scientific investigation have been described in this way. While this state of mind is extremely personal, it is facilitated by participation in activities that have several characteristics:

  • A clear goal
  • Sufficient skill and confidence to complete a task that leads to the goal
  • Ability to concentrate in the moment
  • Immediate feedback
  • Control over your actions

In this state we lose a sense of time and a sense of self-consciousness. We also feel happier, more motivated, and more productive.

Promoting flow is a great goal for the UX designer, and the right design can support this zen-like state for many activities. However, a lot of applications are not used long enough to promote flow, or are used in distracting environments such as a call center or a warehouse, in which designing an application for flow would prove difficult. A better place to begin is with the user’s peace of mind, or a sense of confidence, assurance, and absence of frustration in using the application to achieve his or her goal. When we can’t design for flow, we can use the following flow design principles to create a design that promotes peace of mind for the user, ultimately leading to an increase in productivity and system usage:

  1. Things should work as expected
  2. The user should always know how things are going
  3. Interaction should be distraction-free

Things Should Work as Expected

Users bring expectations to every experience. When these expectations are met with elegance, users respond with delight. When something doesn’t work as expected, they get distracted and may become anxious or frustrated. Distraction makes users lose focus on their current task and reduces their peace of mind. As designers working to create an optimal experience, we should try to avoid distracting our users.

It is surprising how often applications disregard even the most basic expectations regarding feedback. Jakob Nielsen spelled out some of these expectations more than twenty years ago: an application should respond within one second to avoid interrupting a user’s flow of thought; users will become distracted and begin to perform other tasks if an application’s response time exceeds ten seconds. Others have added to this, suggesting that users expect to be able to complete sub-tasks in less than a minute and an entire task in less than ten minutes.

The “busy” indicator in web applications is a good example of timely feedback that reassures users and facilitates peace of mind in using the application. This indicator should be used for any interaction that will take longer than one second to complete.
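
As a rough illustration, a web client can arm the indicator on a timer so that it appears only when the one-second budget is exceeded and never flashes for fast responses. The TypeScript sketch below assumes a hypothetical busy-indicator element in the page; it is one way to apply the guideline, not a prescription.

  // Minimal sketch: show a "busy" indicator only when a request takes longer than
  // about one second. The element id and the URL passed in are illustrative.
  async function fetchWithBusyIndicator(url: string): Promise<Response> {
    const indicator = document.getElementById("busy-indicator");
    const timer = window.setTimeout(() => indicator?.removeAttribute("hidden"), 1000);
    try {
      return await fetch(url);
    } finally {
      window.clearTimeout(timer);            // never shown at all if we finished quickly
      indicator?.setAttribute("hidden", ""); // hide it again if it did appear
    }
  }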

Another set of expectations that people have is dependent on the context of the application in use. Web standards have come a long way in supporting usage context at a high level, and conventions such as the placement of login and logout links in the upper right corner meet expectations also. However, simply following standards and conventions does not necessarily promote users’ peace of mind. We have to understand the overall context and support specific contextual expectations.

The LinkedIn iOS app has a view that lists the people who have recently visited your profile (see Figure 1). A button appears to the far right of some of the people in the list; tapping it sends an invitation to connect to those the user is not already connected to. When the button is tapped, there is no confirmation pop-up. It is a smooth, distraction-free experience. But there is also no way to revoke the invitation. Because I’m not a big fan of inviting strangers to my LinkedIn network, I expect there to be a way to undo my mistake. Since there isn’t, my confidence in using the app diminishes, and I am less likely to use it.

LinkedIn mobile app screenshot
Figure 1. Auto-invite in LinkedIn’s iOS mobile app.

There are a few questions a designer can ask to ensure that an application meets its users’ expectations:

  • Does the app support natural human expectations of feedback?
  • What feedback will promote a user’s peace of mind?
  • What contextual expectations do users bring to the app?
  • How can the app promote confidence in this context?

The User Should Always Know How Things Are Going

Another characteristic of flow is the understanding of the current goal and how we are progressing toward it. An indication of progress promotes confidence in the use of an application, which also supports a user’s peace of mind.

Navigation in software applications has been a topic of investigation for many years. Breadcrumbs, clear labels, and proper navigational elements all contribute to a solid information architecture and positive user experience. However, simply following all of these best practices may not promote confidence in the use of an application, or help the user focus on the task at hand.

Consider the form used to list an item for sale on eBay (see Figure 2). It is a one-page form, a format that works well in some applications. There are breadcrumbs at the top of the page, and the navigation elements are clearly labeled. Unfortunately, the form is long, and users can lose sight of where they are in the listing process. If the groups of form elements were simply numbered and the total number of steps indicated (for example, “2 of 4”), users would know where they are and how much further they have to go.
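
As a rough sketch of that suggestion, the TypeScript below appends an “(n of N)” label to each fieldset legend in a long form. The markup and selectors are assumptions made for the illustration, not eBay’s actual page structure.

  // Minimal sketch: number the groups of a long one-page form ("2 of 4") so users
  // can see where they are. Assumes each group is a fieldset with a legend.
  function labelFormSteps(form: HTMLFormElement): void {
    const groups = Array.from(form.querySelectorAll("fieldset"));
    groups.forEach((group, index) => {
      const legend = group.querySelector("legend");
      if (legend) {
        legend.textContent = `${legend.textContent ?? ""} (${index + 1} of ${groups.length})`;
      }
    });
  }

  // Usage (illustrative): labelFormSteps(document.querySelector<HTMLFormElement>("form#listing")!);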

Ebay item listing screenshot
Figure 2. The item listing form on eBay.

Contrast this with the wizard used to purchase a plane ticket on Delta.com (see Figure 3). The application clearly spells out that there are five steps in the process. There is an indicator at the top of the page that provides feedback to reassure the user of their progress toward the goal of booking a flight. This design pattern has been around for years. It may not be appropriate for every design, but the navigational cues that it provides help users know where they are in the process.

Delta flight booking screenshot
Figure 3. The flight booking wizard on the Delta website.

There are three questions a designer can ask to ensure that users know how things are going when they use an application:

  • What are the users’ goals and what tasks support these goals?
  • Can users always know how things are going and what to do next?
  • What indicators will promote the users’ peace of mind?

Interaction Should Be Distraction-Free

Nothing interrupts flow more than distractions. When you are focused on completing a task and the phone rings, isn’t there something inside of you that is perturbed? Web and mobile applications can interrupt flow in the same way with unexpected distractions.

Distractions come in many forms. Imagine using Facebook or Twitter without the infinite scroll feature. Imagine that after every ten updates you had to hit a “Next Page” link to see more. I doubt you would enjoy the experience as much. The free flow of updates is one aspect of the interface that draws people in and keeps them connected for a period of time.

A common source of distraction is the confirmation pop-up. When a user purposely hits a button, the expectation is that it just works. A confirmation pop-up distracts from the flow of the experience. The RSS aggregator Feedly displays a confirmation pop-up when the “Mark As Read” button is clicked (see Figure 4). An undo feature would be better in this case; it would preserve the flow of the interaction and help the user feel comfortable trying out new features without the fear of failure.
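
One way to implement the undo pattern on the web is to apply the change optimistically and delay the irreversible part long enough for the user to change their mind. The TypeScript sketch below is illustrative only; the callbacks (markAsRead, undoMarkAsRead, showUndoToast) are hypothetical stand-ins for whatever your application provides.

  // Minimal sketch of "undo instead of confirm": apply the action immediately,
  // but hold the irreversible part for a few seconds so the user can reverse it.
  function markAsReadWithUndo(articleId: string,
                              markAsRead: (id: string) => void,
                              undoMarkAsRead: (id: string) => void,
                              showUndoToast: (onUndo: () => void) => void): void {
    markAsRead(articleId);                     // optimistic: no confirmation pop-up
    let undone = false;
    const commit = window.setTimeout(() => { /* sync the change to the server here */ }, 5000);
    showUndoToast(() => {
      if (undone) return;
      undone = true;
      window.clearTimeout(commit);
      undoMarkAsRead(articleId);               // restore the previous state
    });
  }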

Pop up confirmation screenshot
Figure 4. A confirmation pop-up on Feedly.

Another potential distraction is a request for uncommon information. Some applications may require the user to enter information that is not readily available, such as an account number or social security number. For example, an electric company had a change of address form that requested the user’s account number. Site usage analytics showed a substantial loss of conversion when users were asked for this information. When the company added a list of required information at the top of the form to give users the opportunity to gather everything (see Figure 5), the conversion rate increased by over 300 percent.

List shows what a user needs to have to sign up for service
Figure 5. Example of a list of information the user will need before beginning a long task.

There are a few questions a designer can ask to ensure that the user interaction is distraction-free:

  • Are there any interactions that prevent users from having a clear path toward completing their goal?
  • What is the cost of failure when users accidentally activate a feature? Will they want to undo it?
  • Does the application ask for information that users may not have readily available? Would the lack of this information interrupt the flow of their experience?

Conclusion

An application that promotes the concept of flow is a great goal. However, one must spend at least fifteen minutes performing an activity to get into a state of flow, and the best experiences for many applications are completed in ten minutes or less. Therefore, specifically promoting flow may be impossible.

By applying a few of the flow principles, however, we can increase peace of mind and engagement. For work process products, peace of mind can optimize workflow and increase throughput. When consumer applications improve engagement, people are more apt to use them longer, which leads to an increase in revenue. While your product may not be the next Twitter or Angry Birds, incorporating a few of the flow principles in the design will lead to a positive experience.

Contradictions and Implications: Making Sense of Apparently Irrational Behavior

A stylized X-ray of a human skull with the brain highlighted

Sometimes people make up their minds about certain aspects of a technology when they hear about or see it, before they physically interact with it. How likely are users to change their minds after they interact with the technology, particularly when they find their original opinion was incorrect? Are certain aspects of technology more or less influenced by subsequent physical interaction? If so, are there thematic differences between such aspects? This article explores these questions and discusses the implications for user testing when, even in the face of serious usability failings, users do not change their original opinion.

Conflicting Stories

As a participant in a large research study, Betty was interviewed about her impending purchase of an MP3 player. The interviews were timed to be pre-purchase, immediately post-purchase, and a few weeks later. Betty was a university undergraduate and a drummer, and felt that music was very much a part of her life, saying, “Music is what I’m about.” When asked which MP3 player she was going to buy, she said, “A lot of people that I know, their iPods have broken. So that’s why I decided to go for a different brand.”

During the post-purchase interview, however, she revealed that she actually ended up getting an iPod, explaining, “It’s so easy to use, has good software, and is cool…it’s quite stylish as well.” During our third interview a few weeks later, we again asked her about her iPod. She reported, “It broke.” Worse still, she had a horrible time being bounced between the manufacturer and the supplier, and finally returned it for a refund. Betty concluded that she had an overall negative experience. Then we asked if she would recommend it, and Betty answered, “Yes, I would recommend it. As I said, it’s cool, easy to use, it’s a good MP3 player. It is good, it is good.” Her opinion clearly conflicted with her actual experience.

In contrast to Betty’s story, participants in another study were asked to view a full-size color photo of an MP3 player, rating it on aspects such as ease-of-use and understandability. Then participants interacted with the actual device and rated it on the same aspects. One participant, Kate, initially rated the MP3 player in the photo as very easy to use. When Kate was given the actual device, her three-minute experience unfolded as follows: “It’s quite chunky, but looks quite interesting…You can fit it on a key ring…How do you switch it on?…This is actually quite hard to figure out how to use…I don’t like it, you can’t really navigate through it…[shaking head] I don’t understand it…I give up!” Kate had never heard of the brand (iRiver) and summarized her experience with this device as “strange,” and used words like “confused” and “complicated” during the post-interaction interview.

In the same study, participants reviewed a device by a well-known manufacturer (Sony) and encountered signs of poor usability and construction quality. One participant, Steve, said, "The hold function is quite cool, although I imagine it would break quite easily… seems liable to come off when you don't want it to." Yet even after interacting with the device, participants were reluctant to give it low ratings, suggesting they overlooked their difficulties; they concluded that it was a "good" MP3 player.

What’s Going On?

One way to understand Betty's behavior is to attribute it to her motivation to reduce cognitive dissonance, the discomfort that arises when a person holds conflicting views. This analysis, however, would be of little help to a researcher running a usability study, and it could mask valuable data.

In this example, additional questioning about other aspects of Betty's life revealed pertinent information. It turned out that she enjoyed sharing music with a close friend who received pre-release music. Because her friend had set up his gadgets in a way that, she believed, made it easy to share music with an iPod, she was highly motivated to get one. For her, early access to pre-release music was bound up with her strong sense of personal identity with music: "Music is what I'm about."

The contrast between the Sony and iRiver devices gives us interesting clues. Participants seemed to be less tolerant of the iRiver device than of the Sony device. This is possibly because most of the participants had not heard of iRiver, while all had heard of Sony and considered it a “good and reliable” brand.

We believe it may be easier for users to keep the same opinion than to change it, even when faced with contradictory evidence. People have an intricate web of implications built into their opinions, and changing one could mean revisiting, and possibly modifying, the ones that depend on it. Drawing on Kelly's Personal Construct Psychology, researcher Dennis Hinkle suggested that the most meaningful aspects of people's experience are those that carry the most implications. Anticipations with farther-reaching implications may therefore be more likely to influence user preferences and ratings. In the examples above, Betty resists the conclusion that she should not have an iPod because the implications of giving it up would jeopardize an activity that is important to her. Users who have not yet formed a consequential opinion about an iRiver device, on the other hand, might change their minds easily.

Paradoxes such as Betty's and Steve's, where users maintain opinions that ignore experiential evidence, always warrant further investigation. Ultimately, we found that these paradoxes and this resistance to change were related to a network of implications. Episodes like Betty's story give a researcher important clues about how a user is experiencing not just their technological world, but also their own position within it. This directly influences how the user will experience and judge the artifact during the usability test.

When Does Interaction Really Occur?

In these cases, interaction began long before the research study: the seeds of expectation were sown during early experiences with different brands, both well-known and not, beginning with indirect experiences such as watching other people use the devices or viewing television advertisements. If a user's expectations and anticipations are not uncovered in the recruiting screeners, or if usability studies do not explore the implications behind opinions given during a testing session, unexplained contradictions may appear in usability findings.

Uncovering the experiences and expectations people have before they actually use the technology in the usability test is just as important as exploring the physical interactions that occur during a test session. We used semi-structured interviews to explore participants' relevant experiences, paying attention to both direct and indirect ones. I queried Betty further, saying, "This is really interesting, and seems like a contradiction. What do you think might be going on?" Suddenly Betty and I were looking for answers together, rather than Betty feeling that she was being interrogated. By positioning it this way, we invited her to be a co-explorer, uncovering the deeper motivations behind her opinions. We recommend that questions about a user's pre-interaction experience become part of the experimental protocol.

Conclusion

Many factors influence user experience. One important factor is that what counts as usable is also a matter of implications. Users seem to forgive bad design if calling it "bad design" would have too many repercussions. The stories above show how users can ultimately forgive design trespasses when changing their perception of a brand has deeper implications for their self-perception. Further studies are needed to determine the threshold beyond which consumers will finally revolt against bad design.

Our research suggests that we should always explore why users seem to forgive bad design. Our findings become richer when we examine apparent contradictions. By maintaining a holistic view of user experience, we gain a fuller understanding of how people experience technology, and we enrich our model by highlighting implications, self-identity, and convenience as core themes in user experience.
