
Not So Strange, These Fictions (Book Review)

A review of
Make It So: Interaction Design Lessons from Science Fiction
By Nathan Shedroff & Christopher Noessel
Rosenfeld Media, 2012

A miniature Darth Vader appears as a cockpit projection. “For the stormtrooper, the miniaturization works well.” However, what is Darth Vader seeing? Does he have to look up at a giant stormtrooper to make eye contact? If so, this will create an unacceptable situation, as it fails to reflect Vader’s status (and ego). And if we adjust the projection to correct for relative scales, how can we manage the problem of gaze-matching (or gaze monitoring)?

While it may be some time before you have to deal with this issue, it’s not inconceivable. And if you work in video conferencing, some of this is relevant right now. Such is the case with much of the content in this book; some of it is relevant to right-now problems; some is likely relevant in the near future; and some of it is just plain interesting.

Early in the book, the authors describe a set of exclusions.
I was initially disappointed to see that they had “decided not to consider interfaces from written science fiction.” They also excluded comic books, graphic novels, and hand-drawn animated interfaces (including anime and the likes of Futurama). So essentially the book presents a survey of interaction design in sci-fi movies, although filmic interfaces that are not sufficiently detailed to deconstruct are also excluded.

The exclusions make sense once one accepts that the book is serious in its intent of learning about interaction design from the world of sci-fi. Throughout, the authors identify “lessons” and “opportunities” derived from the aspect of interaction design being considered.

The authors surveyed a range of sci-fi movies and shows (many of your favorites make an appearance). Some of these are listed on the accompanying site (www.scifiinterfaces.com), although I could not locate a definitive list.

The first part of the book delves into considerations of different types of interfaces:

  • Mechanical controls, such as joysticks, buttons, and gauges, with examples from Metropolis, Buck Rogers, When Worlds Collide, and Star Trek. Lessons include the use of mechanical controls for fine motor control, and the need to observe gestalt principles.
  • Visual interfaces, with examples from Jurassic Park, Gattaca, Blade Runner, Space 1999, Men in Black, and The Matrix. This is a wide-ranging section, with discussion of color, layers, and transparency, and various presentations of file management systems.
  • Volumetric projection, with examples from Star Wars, Logan’s Run, Total Recall and Forbidden Planet. This is the section identifying the challenge of enabling Darth Vader to maintain his status and relative position.
  • Gesture, with examples from (the original) The Day the Earth Stood Still, Avatar, Iron Man 2 and, of course, Minority Report. The authors identified seven gestures that were common across their database of movies––wave to activate, push to move, turn to rotate, swipe to dismiss, point or touch to select, extend the hand to shoot, and pinch and spread to scale. Examples of each gesture are included and described. This is followed by a discussion of direct manipulation, from which the authors derived the lesson, “Use gesture for simple, physical manipulations, and language for abstractions.”
  • Sonic interfaces, which includes discussion of ambient and directional sound, voice interfaces, and music interfaces (Close Encounters of the Third Kind, Barbarella, and Dune).
  • Brain interfaces, with examples from Buck Rogers, The Matrix, and Dollhouse, explores invasive and non-invasive brain interfaces, worn devices, and implants. This includes the rather bizarre lesson, “Let the user relax the body for brain procedures.”
  • Augmented reality, with examples from Terminator 2, Robocop, and District 9, discusses the use of heads-up displays (HUDs), and issues associated with location awareness, focus, and the use of peripheral vision for non-essential information.
  • Anthropomorphism, with examples from Battlestar Galactica, Until the End of the World, and Alien, considers the way that we tend to anthropomorphize in any case, how this has been used in various movies, and what factors we need to consider when designing for these effects––for example, “Achieve anthropomorphism through behavior.”

The second part of the book looks at four areas of human activity as treated in the genre: communication, learning, medicine, and sex (Sleeper, Serenity, and Lawnmower Man, from which we learn, “Give users safewords.”).

In all, there are many dozens of lessons. Occasionally these seemed to be rather trivial, traditional, or well established (for example, “Otherwise, avoid all caps.”). But this is a picky complaint, and there is plenty of material that is challenging, fresh, and clearly derived from the interfaces under consideration.

I enjoyed the systematic approach taken by the authors. For example, when considering the typography of GUIs, they reviewed each property in their database and found that sans serif typefaces outnumber serif by 100:1. Similarly, they reviewed the colors of screen interfaces and found that blue predominated. The range of UI colors is presented in a neat set of histograms.

The book is liberally sprinkled with stills from the chosen movies. This was fascinating in itself (and a ready conversation-starter on the tram). I’d recommend the hard copy over the online versions from the point of view of using the book to wander through the rich pastures of sci-fi. Although not coffee table format, it will live rather comfortably there.

Recently, I watched the original Total Recall. The scene in which Lori (Sharon Stone) plays virtual tennis has aspects that are still in the realms of sci-fi. While we don’t have the volumetric projection shown in the movie, we do have the gestural recognition in the form of the Wii or Xbox. And although many of the interfaces from sci-fi appear quaint or overblown, others bleed into the real world, or remain compellingly convincing––for example, when Klaatu controls his ship computer by waving his hand in the 1951 movie The Day the Earth Stood Still.

We UX geeks don’t really need an excuse to watch, read, or otherwise consume science fiction, but it’s nice to have a book that not only encourages us to do so, but also enables us to consider what we know in a new context, and learn new things from a consideration of the genre.

Personalization Services: Creating Accessible Public Terminals

It is increasingly difficult to ignore or avoid technology. In a “self-service” economy, the use of Public Digital Terminals (PDTs) such as Automated Teller Machines (ATMs) for banking or Ticket Vending Machines (TVMs) while traveling is often a necessity. As a result, those who cannot access and use these information technologies are at an increasingly distinct economic and social disadvantage.

Accessibility Barriers in ATMs and TVMs

ATMs and TVMs have traditionally presented accessibility barriers to consumers. Users with visual disabilities may rely on audio output, but not all ATMs currently have this facility. Those ATMs that do incorporate some audio accessibility support rely on the user to activate those features using particular keypad combinations that are not consistent from one machine to another. Those with low vision may require a larger text size. People with dyslexia may require a custom color combination in order to read text on the screen more easily. Additionally, older users may feel uncomfortable with information technology and shy away from using digital terminals for their transactions. At the moment, no solution has been developed to support these groups.

Survey Results of Barriers Experienced by Users

The complex nature of ATMs and TVMs, which often offer users an array of options delivered by any number of interface designs, can leave many users feeling overwhelmed. In our own 2011 survey of nearly 300 people in Spain and Germany, respondents reported that existing digital terminals frequently presented a number of barriers, including:

  • Difficulties handling cards and tickets and operating card slots
  • Unclear interface design
  • Differing user interfaces from terminal to terminal
  • No options for customization of functionality
  • No captioning of video or text transcript for audio information
  • Lack of sign language content
  • Limited voice output options
  • Lack of graphics to increase understanding of text
  • No alternative to touch screens or keypad
  • Lack of clear text prompts or instructions to complete procedures

In our experience, and partially because of these barriers, people with disabilities often must rely on direct human interaction to ensure that their transactions are successful. Beyond the economic burden that this reliance implies for industry service providers, it impacts a person’s autonomy and their right to have equal access to basic services.

It is these barriers in particular that the APSIS4all (Accessible Personalized Services In Public Digital Terminals for all) consortium aims to overcome. Anyone can take advantage of the personalization features, but it is people with disabilities, linguistic difficulties, or low digital literacy skills that benefit most from the solutions.

Personalization to Create Accessibility

Led by Technosite in Spain, APSIS4all is a pan-European consortium of industry partners, research institutes, and organizations that represent disability groups. The goal of the APSIS4all project is to make digital terminals such as ATMs and TVMs more accessible and usable through personalization.

Vendors today are keen to push customers towards self-service. The involvement of industry partners is key to the successful delivery of the APSIS4all solution. The Spanish bank la Caixa has installed 800 fully accessible ATMs throughout Barcelona and Madrid. In Germany, TVM manufacturer Hoeft & Wessel has just deployed twenty-four machines used by the Padersprinter transport company in Paderborn. Up to 3,000 customers are expected to use them during the six-month pilot period that began in fall 2013.

From the outset of the project we have engaged directly with users to overcome some of the most frequently reported physical and psychological barriers to accessing digital terminals. APSIS4all addresses these barriers by allowing the user to configure the terminal so the display is simplified, or so that it displays only the options relevant to their needs.

APSIS4all not only focuses on overcoming accessibility barriers, but also on delivering an inclusive user experience by enabling digital terminals to adapt their interfaces automatically, according to an individual user’s needs and preferences. The project opens new methods of interaction with public terminals through the user’s smartphone and other mobile devices.

Direct and Indirect Interaction Approaches

APSIS4all implements two different approaches to an inclusive user experience via either “direct” or “indirect” interaction. The direct approach involves providing users with a contact or contactless smartcard that stores their needs and preferences (see Figure 1).

Person in wheelchair with smartcard.
Figure 1. The direct interaction approach provides users with a contact or contactless smartcard that stores their needs and preferences.

The individual user accesses a web interface (see Figure 2) that allows them to identify their particular needs and preferences. The web interface guides them through a process to define and customize how a terminal presents their information. This information is stored using international standards, which facilitates the sharing with different service providers and systems. One relevant standard is EN 1332.

Screenshots from the web interface.
Figure 2. Screenshots from the web interface used to collect user needs and preferences.

Once a user arrives at a terminal and presents their card, the terminal changes its settings based on the stored information, providing the most appropriate interface available. In this way, public terminals automatically adapt to the individual user. Users can activate a range of personalized features such as changing the size of text, setting foreground and background colors, enabling audio output, adding sign language avatars (see Figure 3), or adding help content to support their interaction with the terminal.

ATM interface with signing avatars.
Figure 3. ATM interface with signing avatars.
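To make the direct approach concrete, the following is a minimal sketch of how a stored profile might be represented and applied when a card is presented; the field names and the applyProfile function are illustrative assumptions, not the actual APSIS4all or EN 1332 data model.

```typescript
// Hypothetical preference profile; field names are illustrative,
// not the actual EN 1332 / APSIS4all data model.
interface PreferenceProfile {
  textSize: "normal" | "large" | "extra-large";
  foregroundColor: string;     // e.g. "#000000"
  backgroundColor: string;     // e.g. "#FFFFFF"
  audioOutput: boolean;        // spoken prompts
  signLanguageAvatar: boolean; // signing avatar overlay
  extraHelp: boolean;          // additional step-by-step guidance
}

// Default interface shown when no card is presented.
const defaultProfile: PreferenceProfile = {
  textSize: "normal",
  foregroundColor: "#000000",
  backgroundColor: "#FFFFFF",
  audioOutput: false,
  signLanguageAvatar: false,
  extraHelp: false,
};

// When a smartcard is read, the terminal merges the stored preferences
// over its defaults and re-renders the interface accordingly.
function applyProfile(stored: Partial<PreferenceProfile>): PreferenceProfile {
  return { ...defaultProfile, ...stored };
}

// Example: a user who needs large, high-contrast text and audio output.
const session = applyProfile({
  textSize: "large",
  foregroundColor: "#FFFFFF",
  backgroundColor: "#000000",
  audioOutput: true,
});
console.log(session);
```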

The “indirect” approach shifts the personalization of the terminals to the Internet, so users can select in advance, from any computer or smartphone, the tickets they will purchase at a TVM. The user requests the desired service (for example, a bus ticket to Paderborn) and the system generates a unique identifier, such as a 2D barcode or a security code, which is transmitted to the customer’s mobile phone. Finally, the user presents the 2D barcode to the terminal’s barcode reader or enters the security code on the terminal keypad to obtain the desired service (see Figure 4).

Diagram of the interaction
Figure 4. The indirect interaction approach uses the Internet to generate a unique identifier code, such as a 2-D barcode, that the user displays at the terminal.
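A minimal sketch of the indirect flow described above, under the assumption of a short numeric security code; the PreOrder record and function names are illustrative, not the project’s actual implementation.

```typescript
import { randomBytes } from "crypto";

// Hypothetical pre-order record; not the actual APSIS4all data model.
interface PreOrder {
  code: string;     // security code sent to the customer's phone
  service: string;  // e.g. "bus ticket to Paderborn"
  redeemed: boolean;
}

const pendingOrders = new Map<string, PreOrder>();

// Step 1: the user orders online and the system generates a unique
// identifier (here a six-digit code; a 2D barcode would encode the same thing).
function createPreOrder(service: string): PreOrder {
  const code = (randomBytes(4).readUInt32BE(0) % 1_000_000)
    .toString()
    .padStart(6, "0");
  const order: PreOrder = { code, service, redeemed: false };
  pendingOrders.set(code, order);
  return order;
}

// Step 2: at the TVM, the user scans the barcode or types the code,
// and the terminal looks the order up and issues the ticket.
function redeemAtTerminal(code: string): string {
  const order = pendingOrders.get(code);
  if (!order || order.redeemed) return "Code not recognized.";
  order.redeemed = true;
  return `Issuing: ${order.service}`;
}

const order = createPreOrder("bus ticket to Paderborn");
console.log(redeemAtTerminal(order.code)); // "Issuing: bus ticket to Paderborn"
```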

The key to the APSIS4all solution is simplicity; activating the personalized interface requires, at most, a minimal gesture such as touching a reader with a contactless smartcard or presenting a 2D-barcode. APSIS4all also foresees enabling and increasing multichannel interaction with the terminals using Near Field Communication-enabled (NFC) smartphones. This would dramatically increase the flexibility and convenience of the process for users. More importantly, it allows the industry service provider to engage directly with the user through a device that so many people, especially those with special needs, already own.

Industry Drive towards Personalization Services

Optimizing the user experience through personalization is a key strategy for industry service providers to engage customers and ensure that people with all kinds of needs have access to the information technology they encounter in their daily activities.

There is an increasing drive in the banking sector to optimize the use of ATMs for the most common services, such as cash withdrawals. Banks also want to deliver a more diverse range of services through ATMs; services that customers would normally complete at a brick-and-mortar branch, such as passbook updates and pre-approved loans.

The reason behind this drive is to improve efficiency and cut costs. But if many users cannot, or are unwilling to, use the ATM in the first place, this drive could ultimately be in vain. Similarly, if passengers can use TVMs effectively, APSIS4all could make public transportation more appealing and increase frequency of use. This would both increase the independence of those with disabilities and reduce the operating costs of the service provider. With the increasing drive towards a self-service society, it is not difficult to envision the potential benefits of automated personalized services in other sectors such as e-government, healthcare, and leisure and entertainment.

Pilot Study of ATM Users Using Personalization

So what about the users? Initial results from a pilot study of la Caixa ATM users in Spain suggest not only that users can access the terminals, but that the resulting user experience is effective: they can complete their intended transactions. In contrast, their experience prior to the APSIS4all solution was, in some cases, unsuccessful.

During trials we measured a range of attributes, such as learnability, ease of use, and satisfaction, with 250 users across the range of user groups who will benefit from the APSIS4all solution. After collating the results, we compared the overall user experience of the traditional ATM interface with the interface users were presented with after completing the personalization tool.

As Figure 5 details, the user experience of existing ATMs for some user groups—particularly those with low or no vision, and those with motor disabilities—is particularly poor. However, by supporting personalization, the subsequent user experience of terminals improves for all groups, sometimes dramatically.

Chart showing improvement after personalization.
Figure 5. In a trial study, the user experience at ATMs improved after people with disabilities used APSIS4all personalization.

Optimizing the UX for All

We can see that personalization does lead to an enhanced user experience for users of the APSIS4all solution. As consumers increasingly expect personalization, perhaps the same will apply to a much broader demographic. The final results from the pilot programs will tell us more, but by empowering consumers with increased access to services and better control over how digital terminals deliver their functionality, service providers may become more attractive to new customers and earn a greater degree of loyalty from their existing clients.

There is a debate within the UX community (and society at large) on the impact of personalization and its effect on privacy. For the participants in the APSIS4all trial, no matter what country they came from, no matter what their specific requirements, personalization became a fundamental path to basic services of daily living.

Acknowledgement

APSIS4all is partially funded under the ICT Policy Support Programme (ICT PSP) as part of the European Commission’s Competitiveness and Innovation Framework Programme (CIP) by the European Community (2010-4 Project 270977). Duration: April 1, 2011 – March 31, 2014

This publication reflects only the author’s views and does not necessarily reflect the views of the European Commission. The European Community is not liable for any use that might be made of the information.

[bluebox]

More Reading

Learn more about international coalitions and initiatives of individuals and organizations working to ensure that the Internet, and everything available through it, is accessible to people experiencing barriers due to disability, literacy, or age.

Other articles in User Experience magazine about this topic include an early article about the Raising the Floor project, the original inspiration for Cloud4All, and the GPII.

[/bluebox]

Nielsen’s Heuristic Evaluation: Limitations in Principles and Practice

Recently, I was asked to review the usability of an audio configuration app and propose enhancements in the areas of navigation design, workflows, workspace management, and the overall usability of the application. Mandated by the business to adopt a cost-effective and rapid evaluation method, I was compelled to choose Nielsen’s heuristic evaluation to assess the product’s ease of use.

Jakob Nielsen’s heuristic evaluation is a quick method to examine a user interface and identify usability issues. Since my prior experience with Nielsen’s heuristics hadn’t been entirely positive, I also explored other well-established principles, such as Bruce “Tog” Tognazzini’s First Principles of Interaction Design and Ben Shneiderman’s Eight Golden Rules of Interface Design. Tog’s principles have been around for years and were revised recently to account for the adoption of mobile, wearables, and internet-connected smart devices. They cover discoverability, readability, accessibility, learnability, latency reduction, and Fitts’s Law, which are some of the key principles for attaining design success. Shneiderman’s rules include enabling frequent users to use shortcuts, which can increase the pace of interaction and are often a lifesaver for advanced users.

Nielsen’s heuristics have the ability to reveal usability issues, but at the same time they fail to capture some of these important areas, which have stemmed from evolving technologies over time. The problem lies not only with the guidelines themselves, but also in the way the heuristic evaluation is practiced.

What Constitutes Good Qualitative Research?

Well-formulated user research helps businesses drive their future strategies. No single research method can uncover the full array of a user’s unmet needs, so qualitative and quantitative techniques are combined to achieve a research goal. Over the years, one of the most debated topics has been how to judge the quality of qualitative research. Researchers have questioned the effectiveness of qualitative studies because of the likelihood of subjectivity and researcher bias. It is normal practice to have a single interviewer set up a questionnaire, talk to the end users, collect the data, and analyze it. Throughout this process, the researcher’s bias enters the study; this is known as confirmation bias, the tendency of a researcher to favor facts that support their own beliefs. The process is no different in the case of heuristic evaluation. What constitutes a good and trustworthy qualitative study is still a matter of critical debate.

In their paper “Ensuring Rigour and Trustworthiness of Qualitative Research in Clinical Pharmacy,” Muhammed Abdul Hadi and S. Jose Closs offer strategies to further evaluate the quality of qualitative research. They propose methodological techniques that include triangulation, self-description, member checking, prolonged engagement, audit trail, and peer debriefing.

  • Triangulation – A strategy where the intent is to collect two or more related data sources, data collection methods, or researchers to reduce the bias of a single source method or researcher.
  • Self-Description – This enables researchers to explain their position within the study and how their personal beliefs and past training have influenced the research.
  • Member Checking – Validating the data by formal and informal means, analyzing themes and categories, and checking interpretations and conclusions with study participants.
  • Prolonged Engagement – A prolonged engagement with the user can help an evaluator bring out important issues. A user cannot learn an entire piece of software in a short time frame; over time the user becomes an expert and gains a deeper understanding of the entire anatomy of the software.
  • Audit Trails – In qualitative research, audit trails make it possible for others to see how analysts achieved their decisions through point-by-point documentation of every part of the examination procedure.
  • Peer Debriefing – Discussing a research topic with a disinterested peer can help researchers illuminate newer angles of data interpretation and often act as a good basis for identifying possible sources of bias.

The Problem with Nielsen’s Heuristics

When Nielsen developed the heuristic guidelines with Rolf Molich in the 1990s, user interfaces were not considered as critical or complex in terms of navigation, workflows, aesthetics, and layout as they are today. Though it cannot be denied that earlier-generation user interfaces suffered from major usability issues, those issues were largely disregarded: usability awareness was low, and the interaction problems users experienced with websites and software applications were rarely recognized or reported. As Randolph G. Bias and Deborah J. Mayhew note in their book Cost-Justifying Usability: An Update for an Internet Age, enterprises used to put up strong resistance because they felt that usability science was disruptive, expensive, and time consuming. The real need for usability began to be felt as user interfaces became more complex, with a noteworthy move from building simple static websites to creating dynamic, real-time, data-oriented web applications. At this juncture, the majority of enterprises adopting usability as a practice embraced Nielsen’s principles as a quick method to reveal usability issues, and the heuristics continued to gain popularity over time.

However, since these heuristics are one-dimensional and were shaped with desktop applications in mind, one of the biggest challenges the guidelines face is scalability. They are less effective in the next-generation design ecosystem, where conversational user interfaces, multimodal interfaces, tangible user interfaces, and wearables are taking precedence. These interfaces are built for emerging users, with newer interaction rules intended to solve unique design problems.

In the article “Usability Expert Reviews: Beyond Heuristic Evaluation,” author David Travis points out that Nielsen’s heuristics can be challenged by the fact that these principles, though widely used, have never been validated. There is no evidence that applying them in the design and development of a user interface will improve its usability. Dr. Bob Bailey, president of Computer Psychology, Inc., noted that a better, research-based set of heuristics was proposed by Jill Gerhardt-Powals. Created in 1996, these rules were designed around how humans process information. In the paper “Cognitive Engineering Principles for Enhancing Human-Computer Performance,” Gerhardt-Powals claims that a cognitively engineered interface outperforms interfaces that are not cognitively engineered in terms of performance, satisfaction, and workload. Research was carried out with 24 Drexel University students to support the hypothesis.

As discussed in the paper “Discount Usability Testing” from the University of Calgary, heuristic evaluations suffer from oversimplification. Nielsen distilled his optimized set from the original body of usability guidelines, which has more than a thousand entries, to make the method simple, easy, and fast, which also makes it generic in nature. This generic nature can mislead designers and rarely helps them uncover usability issues of a specific nature.

A usability expert can recommend ways to improve the usability of an application using the heuristic evaluation, but measuring user satisfaction post-implementation is a step that is often omitted. As a result, user satisfaction scores are rarely integrated into the design process. Usability testing should be tied to the design process to validate whether the heuristic guidelines actually made a difference to the product.

Nielsen’s heuristics present two distinct types of problems: the way the heuristic evaluation is practiced, and the limitations of the guidelines themselves.

The way the heuristic evaluation is practiced

The main reason the heuristic evaluation is losing viability is that it is not practiced as articulated in the rule book. Nielsen states that a heuristic evaluation should be a group effort, insisting that no individual can examine all the issues and that “different people find different usability problems.” Due to short timelines and budget constraints, heuristic evaluations are often conducted by a single evaluator, and the resulting researcher bias is completely ignored. The researcher does not describe their personal belief system or the setting under which the analysis was carried out. Since there is no shared mental model between the user and the evaluator, certain usability issues identified during the evaluation process may turn out to be false positives. The severity ratings (a combination of the frequency, impact, and persistence of a particular interface-related problem, assigned by an evaluator) are subjective. The end users might have completely different pain points that the evaluator never recognizes. An evaluator might flag an additional click as a core usability issue, while the end user doesn’t mind because they are used to it and it works well for them.

The guidelines prescribed by Nielsen have limitations

  • Visibility of the system status – In this guideline Nielsen talks about keeping users informed about what is going on through appropriate feedback within a reasonable time. Nielsen never stated that status information ought to be precise and simple to see. It is often the case, while installing an update, that the system shows an inaccurate estimate of the time the update will take (when it says five minutes, it typically takes longer). Sometimes no time estimate is available at all and the status bar simply keeps animating. Interface feedback should be prompt, meaningful, and easily understood, so that users know their actions have been noted by the system. These key points are lost in this rule. Also, displaying active states within limited screen real estate is difficult. Today, even desktop apps are adopting concealed navigation patterns in the form of the hamburger menu, and when the navigation is invisible, so is the active state.
  • User control and freedom – This guideline addresses the “undo” and “redo” actions and a clearly marked “emergency exit” to leave an unwanted state. The guideline is presented in an abstract form with no clear indication as to what extent the user should have control over the system. Should the control extend to modifying the anatomy of the software? Should the control be over the data or over the windows and dialog boxes? The guideline talks about undoing an action, but not to what extent: will an undo command revert a stream of actions or just one action? As stated in a course lecture from MIT, performing an undo in an application with various concurrent users (like a shared system whiteboard where anyone can scribble) raises the question of whether undo should affect just a user’s own actions or everyone’s actions. These intricacies were never considered or discussed in the guideline. A good interface offers the user a platform to explore. Cancel, undo, and back buttons are essential in building an interface, but once a user decides which streams of actions to undo or cancel, the next question is how to divide the stream into units or chunks as desired by the user. For example, in a wizard-based navigation, if a user completes a number of steps and finds they are not serving the purpose, should an undo at that moment reverse just the previous action or roll back the entire wizard? The guideline fails to manage such complexities. Take the example of choosing a password. A typical system is tied to certain mandates and prompts the user to choose a lengthy, complex password that is difficult to memorize. In this instance, the purpose of the guideline is defeated, since the system dictates the password rules and leaves users no room for freedom. Possibly the guideline requires more expansion on a case-by-case basis.
  • Recognition rather than recall – In this guideline, Nielsen only scratches the surface of ways to maximize recognition and minimize recall. He recommends removing visual clutter, building on common UI metaphors, and offloading tasks. But considering the multifaceted nature of interface design, with newer interaction patterns constantly being adopted, metaphors have short lifespans. Metaphors are either fundamental concepts or knowledge acquired in the past that is applied while interacting with something new. It is true that people use acquired knowledge of one thing while using another; the classic example is working with a word processor and mapping the experience onto using a typewriter. But a mobile user who has never experienced a multi-touch gadget won’t suddenly consider performing a double-tap to expand content; it’s not an obvious thing to do, and since there is no acquired knowledge, it is hard to recognize. An article titled “Why UX Designers Should Use Idioms Rather Than Metaphors” mentions that since metaphors depend on pre-existing knowledge, it is challenging for a designer to create a metaphor from a limited pool of objects and actions, and it concludes that common metaphors are not permanent. Recognition requires only a simple familiarity decision, and a design built from familiar items is widely accepted. However, as UI design frameworks are updated with newer patterns, the lifespan of familiar items becomes shorter. This is particularly true for emergent users who have not experienced the progression of user interfaces and the evolution of interaction patterns; for them, familiarity is short-lived.
  • Flexibility and efficiency of use – While Nielsen mentions accelerators, keyboard shortcuts, which are a significant way to speed up interaction and minimize task time, are not indicated in the original guideline. There are many tailored versions of the guideline that possibly address that issue, but adding shortcuts is just one aspect. Applying flexibility to an interface needs a broader look. To induce flexibility, a user should have more than one way of doing things rather than following a single linear pattern. For instance, there are four different ways to close a modal dialog box: clicking the cancel button, clicking the cross icon at the top right corner of the window, pressing the escape key, or clicking outside the modal window (a minimal sketch wiring all four paths to one handler follows this list). The guideline misses the perspective of adding multiple entry and exit points and incorporating non-linearity to make a design flexible and efficient.
  • Aesthetic & minimalist design – This guideline fails to capture the essence of design and lacks depth. It mentions only dialog boxes, while a full-fledged interface goes well beyond a dialog box. The definition of minimalist design is not clearly stated, and with each designer interpreting it in their own way, the outcome becomes subjective. Does aesthetic and minimalist design mean an uncluttered interface, or a progressive disclosure of information at the cost of an extra click or an additional user input?
  • Human visual system – Nielsen’s principles do not address the importance of the human visual system and how it affects interface design. Placing items in a predictable place is a key design consideration. As mentioned in the article “F-Shaped Pattern For Reading Web Content” by the Nielsen Norman Group, an eye-tracking study conducted with 232 users revealed that the dominant reading pattern looks somewhat like an F. However, the study never disclosed the monitor resolution used in the experiment. As users move to different screen dimensions for web browsing, content can become scattered or dense, and the eye travels in unpredictable directions to find it. In such scenarios, the F-pattern is bound to break.
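As a small illustration of the flexibility point above, here is a browser-based sketch that wires all four ways of closing a modal dialog to a single handler; the element IDs and markup are hypothetical.

```typescript
// Hypothetical element IDs; assumes a modal dialog with a cancel button,
// a close ("x") icon, and a backdrop element behind the window.
const modal = document.getElementById("modal")!;
const backdrop = document.getElementById("modal-backdrop")!;

function closeModal(): void {
  modal.hidden = true;
  backdrop.hidden = true;
}

// 1. Cancel button
document.getElementById("modal-cancel")!.addEventListener("click", closeModal);
// 2. Cross icon at the top right corner of the window
document.getElementById("modal-close-icon")!.addEventListener("click", closeModal);
// 3. Escape key
document.addEventListener("keydown", (e) => {
  if (e.key === "Escape" && !modal.hidden) closeModal();
});
// 4. Click outside the modal window (on the backdrop)
backdrop.addEventListener("click", closeModal);
```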

Conclusion

In his 2013 “Mobile Usability Features” lecture at Google, Nielsen stated, “Usability science reveals the facts of the world,” and “You’ve got to design for the way people actually are.” However, the facts of the world change over time; they are replaced with newer thoughts, visions, and insights. Nielsen’s heuristic principles lack the ability to adapt to this reality. While the guidelines remain a standard assessment tool, they fall short on scalability and adaptability to the changing ecosystem of design. With the proliferation of smart devices, wearables, and mobile phones, the global design language is being updated with new rules.

The heuristic evaluation is a quick method to reveal usability problems. An evaluator must have domain knowledge, must have been trained on the software to be evaluated, and must have clear direction from the business on what is expected from the study. In a study evaluating a web-based communication tool for nurse scheduling, Po-Yin Yen and Suzanne Bakken found that while usability experts detect general interface issues, end users are the ones who identify serious interface obstacles while performing a task. Therefore, leveraging empathy and establishing a shared mental model between end users and designers holds the key to understanding end-user problems better and routing them appropriately to design better systems.

[bluebox]

How to Conduct a Better Heuristic Evaluation

  • Use multiple evaluators – Involving multiple evaluators will minimize researcher bias.
  • Severity ratings – Severity ratings are subjective. The best way to make them effective is to have multiple evaluators rate each finding and average the ratings (a minimal sketch of this averaging follows this sidebar).
  • Understand business goals – Look for usability standards that best suit your product. A combination of multiple standards might be a better approach depending on the type of interface you’re evaluating. Look to Tog’s First Principles of Interaction Design and Shneiderman’s Eight Golden Rules of Interface Design to bridge the gaps where Nielsen’s heuristics have fallen short in adapting to today’s design ecosystem.
  • Domain knowledge – Get a solid understanding of the domain before studying a piece of software so that a shared mental model can be established between the evaluator and the end user.

[/bluebox]
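A minimal sketch of the severity-averaging idea from the sidebar above; the findings and sample ratings are illustrative, while the 0-4 scale follows Nielsen’s published severity ratings.

```typescript
// Each evaluator rates each finding on Nielsen's 0-4 severity scale
// (0 = not a problem, 4 = usability catastrophe). Values are illustrative.
const ratings: Record<string, number[]> = {
  "Hidden navigation obscures the active state": [3, 2, 3],
  "No undo after completing the wizard": [4, 3, 4],
  "Status bar animates with no time estimate": [2, 2, 1],
};

// Average each finding across evaluators and sort the most severe first,
// which dampens any single evaluator's bias.
const averaged = Object.entries(ratings)
  .map(([finding, scores]) => ({
    finding,
    meanSeverity: scores.reduce((sum, s) => sum + s, 0) / scores.length,
  }))
  .sort((a, b) => b.meanSeverity - a.meanSeverity);

console.table(averaged);
```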

 

The Green Machine: Going Green at Home

Thanks to global awareness campaigns such as Al Gore’s An Inconvenient Truth, the problem of global warming and its worrisome threat is no longer in question. However, this information, albeit frightening, does not necessarily prompt change in people’s behavior and their way of life. Two vital issues need to be addressed: how to help people reduce their ecological footprint, and how to persuade and motivate them to change their behavior. Nathan Shedroff lists five approaches to sustainable design in his 2009 book Design Is the Problem: reduce, reuse, recycle, restore, and process. This article considers another important aspect: how to persuade people to reduce their ecological footprint. The objectives of the Green Machine, a mobile phone application that charts personal energy consumption, are to persuade and motivate people to reduce their energy consumption and change their behavior.

Our first concept focuses primarily on household energy consumption, which represents 19 percent of total U.S. CO2 emissions. However, our approach could be extended in the future to other areas, such as waste and recycling, transportation, shopping, and eating. The Green Machine is intended to build on Smart Grid technology, an important innovation that enables users to acquire instantaneous feedback about their energy consumption. Studies conducted by Sarah Darby and reported in her article The Effectiveness of Feedback on Energy Consumption show that feedback can reduce household energy consumption by about 10 percent without any major lifestyle changes. Comparing this amount to United States Energy Information Administration data, one can readily see that with simple and easy changes we could save as much energy as the U.S. produces from wind and solar.

The Green Machine: Work in Progress

We believe that simply showing data visualizations, as basic Smart Grid software will enable, is not enough to make people effectively reduce their energy consumption. We turned to persuasive techniques and the study of the behavior-change process in combination with context of use, user-interface analysis, and information visualization in order to find ways to design a product that will make people reduce energy consumption. The result of our work, the Green Machine, is a mobile phone application based on five main functionalities:

  • Providing feedback about one’s energy consumption in comparison to personal goals
  • Displaying a vision of the future linked to that consumption
  • Enabling social interactions with social networking and energy-consumption comparisons
  • Offering tips to reduce one’s ecological footprint
  • Providing individual or team-based competitions and games.

We have developed a prototype and are currently testing it to collect feedback about the application’s usability and usefulness, as well as users’ impressions about the motivational aspects of the design. User test analysis will be followed by a final redesign phase of the Green Machine application.

Background Research

Persuasive technology, which aims to motivate people to perform beneficial actions, has appeared in many fields during the last decade. Persuasion is defined by Dr. B. J. Fogg, director of Stanford’s Persuasive Technology Laboratories, as “an attempt to shape, reinforce, or change behaviors, feelings, or thoughts about an issue, object, or action.” Persuasive applications have been developed for many different purposes, such as encouraging weight loss, helping people quit smoking, or promoting sports and exercise. Each application is based on providing feedback to users about themselves, enabling analysis that increases their motivation and changes their lives through appropriate behavioral changes. These feedback-based persuasive applications have shown important beneficial results and have been applied to environmental sustainability.

Smart Grid technology makes it possible to provide this kind of instantaneous feedback about energy consumption. The challenge of designing persuasive user interfaces oriented towards the environment, however, is that most people are not intrinsically motivated to care about the issue and change their behavior, as emphasized by Tscheligi and Reitberger in Persuasion as an Ingredient of Societal Interfaces. On the other hand, people with high social awareness tend to be unsatisfied with minimalist feedback, as Yun showed in Investigating the Impact of a Minimalist In-Home Energy Consumption Display.

Social interactions add persuasive aspects and help to increase involvement and motivation. According to Mankoff and colleagues in Leveraging Social Networks to Motivate Individuals to Reduce their Ecological Footprints, leveraging social networks is a powerful tool for integrating environmental sustainability into daily activities and social context. An experiment by Cialdini and colleagues in The Constructive, Destructive, and Reconstructive Power of Social Norms emphasized the effects of neighborhood comparison on energy savings. According to the findings, people reduced their energy consumption when they found out that their neighbors had already taken steps to curb their energy use.

Competition is another way to motivate people to increase their awareness and reduce their energy consumption. For example, the “Energy Smackdown” is an internet-based challenge between individual households to reduce home energy consumption and CO2 emissions.

Analysis

The Green Machine has two persuasion objectives: microsuasion and macrosuasion, according to Fogg’s terminology. The microsuasion goal is to make people reduce their household’s energy consumption and the macrosuasion goal is to change people’s behavior. These objectives are intrinsically linked (as short-term and long-term objectives) although they are on two different levels. To create behavioral change through the Green Machine, we defined five key elements:

  • Increase frequency of use of the application
  • Motivate reduced energy consumption
  • Educate on how to reduce energy consumption
  • Persuade users to reduce energy consumption
  • Persuade users to change behavior.

Each step has requirements for the application.

Motivation is a need, want, interest, or desire that propels someone in a certain direction. From the sociobiological perspective, people in general tend to maximize reproductive success and ensure the future of descendants. We apply this theory in the Green Machine by making people understand that every action has consequences on environmental change and the Earth’s future.

screencap of mobile app
Figure 1. The Green Machine Total Energy Use screen enables users to visualize their energy consumption in kWh (kilowatt-hours), currency, and CO2 released. The screen also shows goal-setting insights and equivalent comparisons. The Calendar function and extra features will enable other types of comparisons to be made.

chart
Figure 2. The five-step behavior-change process that shapes the Green Machine design and catalyzes specific detailed solutions.

information architecture
Figure 3. Green Machine information architecture.

We also drew on Maslow’s A Theory of Human Motivation, which he based on his analysis of fundamental human needs. We adapted these to the Green Machine context:

  • The safety and security need is met by the possibility to visualize the amount of money saved
  • The belonging and love need is expressed through membership of an eco-friendly community or belonging to a particular team in the Challenges section
  • The esteem need can be satisfied by social comparisons that display energy consumption and improvements
  • The self-actualization need is fulfilled by being able to visualize the amount of CO2 released in the atmosphere and can also be met by making donations to sustainability associations

Because setting goals helps people to learn better and improves the relevancy of feedback, the Green Machine Login page asks users how much money they want to save, or which friends’ energy profiles they wish to look up.

To improve learning, the application integrates contextual tips to explain how to reduce energy consumption. It also shows tips that have been successful for other users, and other products or services they’ve tried.

Social interaction also has an important impact on behavior change, so the Green Machine leverages social networking and integrates features like those found in forums, Facebook or Twitter.

The Green Machine is intended to come with frequent feedback, including daily energy-consumption snapshots, a future-Earth metaphor, and social interactions, such as energy comparison, friendly challenges, or added comments. We also aimed for long-term use because, as Darby explains, it takes over three months for behavior change to become permanent.

To meet user requirements for a convenient, accessible product that matches the context of use, is suitable to other activities, is always on, and is at people’s fingertips, we decided to make the Green Machine a mobile phone application. This choice makes it available on the most common and well-known electronic device in the world today.

screencap of app
Figure 4. Green Machine Total Energy Use Vs. Friend comparison.

screencap of app
Figure 5. Green Machine Earth in 2200.

screencap of app
Figure 6. Green Machine friend screen.

Design

The background research and succeeding analysis emphasized five main issues: feedback, a future-Earth metaphor, social interactions, tips, and competitions/challenges.

The interaction and visual design was a particular challenge on the small screen of a mobile phone.

We decided to use tabbed navigation so that every action can be achieved in fewer than four clicks. Users know exactly where they are in the application architecture thanks to the screen title information. The visual design is based on typical iPhone styles because of the platform’s brand image (trendy, with “early adopters” as the target market) and its application download capability. In the user interface, a small energy thermometer is always displayed at the top of the screen and shows the current household energy use.

The Energy tab (Total Energy Use) displays the kWh consumed, the money spent, and the amount of CO2 released. Users can visualize this total energy use in different time periods such as a day, week, month, or year. This energy use is automatically compared with one’s goal settings.
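A minimal sketch of the comparison behind the Energy tab; the conversion factors, field names, and goal logic are illustrative assumptions, not the actual Green Machine implementation.

```typescript
// Illustrative conversion factors; real values vary by utility and region.
const PRICE_PER_KWH = 0.15; // currency units per kWh (assumption)
const CO2_KG_PER_KWH = 0.4; // kg of CO2 per kWh (assumption)

interface EnergyReading {
  period: "day" | "week" | "month" | "year";
  kwh: number;
}

// Express a reading as kWh, money, and CO2, and compare it to the
// user's goal for the same period.
function summarize(reading: EnergyReading, goalKwh: number) {
  return {
    period: reading.period,
    kwh: reading.kwh,
    cost: reading.kwh * PRICE_PER_KWH,
    co2Kg: reading.kwh * CO2_KG_PER_KWH,
    percentOfGoal: Math.round((reading.kwh / goalKwh) * 100),
  };
}

console.log(summarize({ period: "week", kwh: 85 }, 100));
// -> { period: "week", kwh: 85, cost: 12.75, co2Kg: 34, percentOfGoal: 85 }
```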

Social interactions are also included. Friend and celebrity comparisons enable users to select one of their friends, or one of the many celebrities using the Green Machine (for example, President Barack Obama or Al Gore), and compare consumption.

The Earth in 2200 screen displays breaking news based on one’s energy consumption. For high energy consumption, the state of the Earth is shown with dire consequences, such as more environmental refugees, outbreaks of war, and endangered biodiversity. For low energy consumption, users see a healthier Earth with sufficient food and water and a greater chance for peace.

A networking tab is aimed at motivation through social interactions. Users can read and visualize news from their friends: how much they have consumed, what their challenge results were, which tips were helpful, what their current energy use profile is, and to which charities they have provided donations.

The Tips tab enables users to learn how to reduce their energy consumption. The data visualization for each tip maps the cost and the amount of potential reduction. This information gives users a direct view of the impact of tips they choose. An individual tip also shows how many friends have used it and found it helpful, their additional comments about a particular tip, its price (if it is a physical product like a light bulb), distance to the closest store to buy it, and the overall rating given by Green Machine users.

The Challenge and Games tab has different competitions for users to reduce their energy use. Individual and team-based challenges are available to meet pro-individual and pro-social personalities. A game mode offers relevant video games to help people reduce their energy consumption.

User Test

Our next step will be user tests of the Green Machine. The primary objective is to identify usability issues with the Green Machine application’s user interface. We shall also assess whether users believe the application would make it easier for them to reduce their energy consumption and whether users believe the application could encourage them to make further reductions in their energy consumption.

These tests will gather both quantitative and qualitative results. Sessions will include free exploration to gather first impressions about the Green Machine and expectations for content and functionality. Participants will then complete a task scenario with a retrospective think-aloud.

Finally, they will complete questionnaires, covering both their energy consumer and green profile and their evaluation of the usefulness of the Green Machine. We are particularly interested in whether or not users think that this application could motivate them to reduce their energy consumption.

Discussion and Conclusion

The work on the Green Machine aims to incorporate persuasion and motivation for behavior change into a mobile phone application. This project shows a possible effective use of the information from Smart Grid technology in combination with mobile technology infused with persuasive design techniques and visualization.

Although previous research seems to indicate that such an application will have some impact on energy consumption, it will be interesting to gather actual user data and to study cross-cultural differences in the results of user testing as well.

Our long-term objective for the Green Machine is to create a functional working prototype so that we can test whether it actually makes people reduce their energy consumption in the long run, under real use conditions. If our theories prove correct, this could have significant implications for the use of Smart Grid software, which is slated for a significant expansion over the next few years within the United States.

Beyond Player Experience: Designing for Spectator-Players

UX evaluation and design of video games (sometimes referred to as Player Experience) is an increasingly important factor in game development. While most current approaches focus on users directly interacting with the game content (players), this article aims to start a discussion around UX research and design considerations specifically for interactive spectatorship (spectator-players). Spectator-players actively engage with the broadcast in some way, whether simply chatting with other spectators and players, influencing gameplay, or even participating in it. UX researchers and designers focused on games face new challenges in designing guidelines and evaluation approaches for these spectator-players. This article, motivated by recent projects at UXR Lab, summarizes opportunities in this new area of UX and highlights interactions that affect spectator-player experiences.

It is important to start with a discussion of those elements that increase the watchability of a game and make spectator-players want to view and engage with the game content. The key factor is the game’s subject matter; it is often very difficult to motivate users to engage with entertainment content outside their interests. Hence, the first step would be to define and understand the audience. Once we know who they are, we need to understand the human factors that are involved in designing for our spectator-player audience. Could our design allow users to show off their own personal play style? Could it facilitate communication between players and spectators? Should it foster a sense of camaraderie? Should it make the spectators feel as if they’re winning or losing along with the players? Could our design create and foster a bond between the spectators and the player to motivate a deeper engagement? Understanding the underlying human factors behind these, and other similar questions, can influence and affect our design decisions. Some examples of these elements include:

Competition. Competition creates a sense of rivalry and superiority that makes the experience inherently more interesting to watch and engage with. Combined with team play, it gives spectator-players a common goal that can lead to feelings of shared defeat, victory, or accomplishment, making the overall experience more meaningful. These elements contribute to a sense of affiliation for spectator-players, creating and maintaining an emotional connection with the game content, players, teams, or even other spectators. Finally, in order to sustain long-term engagement and viewership, a game must be capable of providing some form of content variation. This variety can arise from the emergence of new play strategies, the unpredictability of players, the appeal of unique in-game events, or the generation of new game content.

Image of a television with 2 video game controllers plugged into it.
Figure 1. Consider design elements that increase the watchability of games. (Credit: Samantha Stahlke, UXR Lab)

We must also consider a set of mechanics allowing for the active interaction of spectator-players with the game, such as:

Chat Input. This is arguably the simplest mechanic to incorporate, since the vast majority of streaming platforms already have an integrated chat system that can be used to make spectatorship more interactive. The chat system enables spectators to enter comments (often for social purposes) and, with the addition of a simple parser, can also allow spectators to enter commands. An example of this can be found in the 2014 phenomenon Twitch Plays Pokémon, in which chat messages were filtered and parsed into inputs for a copy of Pokémon Red running on a Game Boy emulator. Similar systems could also be used to parse chat inputs into commands for voting or polling on game actions (a rough sketch of this kind of chat parsing and vote tallying follows these examples).

Polling and Betting Interfaces. These provide a more organized approach than chat input for spectator-player interaction, whether they are set up to receive votes on topics or decisions, or to select the next event in the game based on community interest. Spectator-players can directly affect what is happening in the game world and watch players, streamers, or other spectators interact with that content. Thus spectator-players can engage meaningfully with streamed content with a low barrier to entry (the sketch following these examples includes a simple vote tally).

Cheering and Donation Incentives. Mechanics like Twitch's "cheer" allow spectators to spend money (or in-game currency) on virtual tokens that can be used to display special emoticons during gameplay streams. These incentives can be used to support a particular player, game, organization, spectator, or streamer. Additionally, game streamers often offer call-outs or read on-screen viewer messages for spectator-players who donate or subscribe to their channels. This mechanic gives spectator-players an opportunity to stand out from others and a sense that they are really contributing to or influencing the game, enhancing feelings of accomplishment.

Raw Viewership Population. This provides a sense of team affiliation and team play for spectators, which can facilitate long-term spectatorship. Game interactions are based on spectator teams, providing spectator-players with collective rewards. A spectator team can win against individual players or another team of spectators. Being rewarded as a team fosters group camaraderie and creates community around the game. A higher level of this interaction is Direct Participation, which gives spectator-players a chance to play with or compete against players individually. This is a highly interactive and socially rewarding game experience: spectator-players can play as the main character or join a cast of extras controlling minor allies and enemies in-game.

Content or Game Modification. This allows spectator-players to directly interact with the game and influence the gameplay by modifying elements of the game world, such as dispatching enemies, sending in-game resources, supplying dialogue, or changing visual aspects of the game. This direct input over some aspect of the game world boosts spectator-players' sense of consequence for their actions and their feelings of control.
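To make the first two mechanics concrete, here is a minimal sketch (in TypeScript) of how chat messages might be routed either to direct game commands or to a running vote tally. The command list, the "!vote" prefix, and the one-latest-vote-per-user rule are illustrative assumptions, not any particular platform's API.

type GameCommand = "up" | "down" | "left" | "right" | "a" | "b";

const COMMANDS = new Set<string>(["up", "down", "left", "right", "a", "b"]);

interface ChatMessage {
  user: string;
  text: string;
}

const commandQueue: GameCommand[] = [];             // fed to the game or emulator
const latestVoteByUser = new Map<string, string>(); // one vote per user, latest wins

function handleChat(msg: ChatMessage): void {
  const text = msg.text.trim().toLowerCase();
  if (COMMANDS.has(text)) {
    // A bare recognized token is treated as direct game input.
    commandQueue.push(text as GameCommand);
  } else if (text.startsWith("!vote ")) {
    // A "!vote <option>" message updates that user's current vote.
    latestVoteByUser.set(msg.user, text.slice("!vote ".length));
  }
  // Anything else is ordinary social chat and is left untouched.
}

function winningOption(): string | null {
  const counts = new Map<string, number>();
  for (const option of latestVoteByUser.values()) {
    counts.set(option, (counts.get(option) ?? 0) + 1);
  }
  let winner: string | null = null;
  let best = 0;
  for (const [option, count] of counts) {
    if (count > best) {
      winner = option;
      best = count;
    }
  }
  return winner;
}

// Example usage
handleChat({ user: "a", text: "UP" });
handleChat({ user: "b", text: "!vote spawn-boss" });
handleChat({ user: "c", text: "!vote rain" });
handleChat({ user: "d", text: "!vote rain" });
handleChat({ user: "b", text: "gg everyone" });
console.log(commandQueue);    // ["up"]
console.log(winningOption()); // "rain"

A real implementation would sit behind the platform's chat API and add rate limiting, but the core filtering and tallying logic can stay this simple.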

By combining the human factors for content watchability with design elements and mechanics facilitating spectator interaction, we can create experiences that provide a deeper level of engagement and social connection for spectator-players.

A successful game in this domain needs to meet three key requirements: a rewarding experience for its players, watchability for passive viewers, and fulfilling interactivity for spectator-players. However, the majority of current efforts in games UX evaluation deal only with the first criterion (discussed in my previous UXPA article). Understanding interactive spectator-player experiences therefore requires the development of novel UX research approaches. Evaluation of such systems must recognize and adapt to the differences in users' needs as they shift between roles as players, spectators, and spectator-players. We need to acquire information on spectator-player behavior (for instance, interaction with the system), the reasons for this behavior, and the experiences resulting from the interaction. By understanding the relationship between their behaviors, reactions, and emotions, we gain better insights into the complexities of these experiences. This can be a challenging task when evaluating spectator-player experience (SPX), as we are blending concepts underlying player experience evaluation (such as theories of flow or fun, for example, the intentional inclusion of challenges to make the experience enjoyable) with those behind usability evaluation in web-based or productivity applications (such as evaluating ease of use in usability testing to remove or restructure any possible constraints).

A potential solution would be the use of a mixed-methods approach leveraging both analytics and other user research techniques, like interviews and focus groups. While game analytics can provide an effective source of continuous data regarding spectator-players’ actions, other methods can help to contextualize this data and provide researchers with a more complete basis for analysis.

An example of this mixed-methods approach can be found in a project at the UXR Lab, where spectator-players interacted with a game through in-game currency that could be used to purchase items affecting the gameplay. Currency management was the main mechanic for the spectator-players. The key goal of our evaluation study was to help developers understand the game's currency system and spectator-player spending habits, and to measure the impact of purchasing on player engagement. Because this was a live streaming game, we chose to have all the study participants play in the same general session as other online players. We tracked participants' information, time, and purchase data for every single change-of-currency event in the database. It was very important for us to know the total amount of in-game money spent by participants and the breakdown of spending between items. To achieve this, we created database queries that would format the data in various pivot tables (see Figure 2 below).

Figure 2. Currency spend per spectator per item
Figure 2. Currency spend per spectator per item. (Credit: Thomas Galati, UXR Lab)
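As a rough illustration of the aggregation behind Figure 2 (not the actual queries we ran), the sketch below rolls raw change-of-currency events up into a spend-per-spectator-per-item pivot; the field names are hypothetical.

interface CurrencyEvent {
  userId: string;
  itemId: string;
  amount: number;    // currency spent in this event
  timestamp: number; // ms since the start of the session
}

// userId -> itemId -> total spend, the same shape as the Figure 2 pivot
type Pivot = Map<string, Map<string, number>>;

function pivotSpend(events: CurrencyEvent[]): Pivot {
  const pivot: Pivot = new Map();
  for (const e of events) {
    const row = pivot.get(e.userId) ?? new Map<string, number>();
    row.set(e.itemId, (row.get(e.itemId) ?? 0) + e.amount);
    pivot.set(e.userId, row);
  }
  return pivot;
}

// Example usage
const pivot = pivotSpend([
  { userId: "user1", itemId: "itemA", amount: 50, timestamp: 1000 },
  { userId: "user1", itemId: "itemA", amount: 25, timestamp: 8000 },
  { userId: "user2", itemId: "itemB", amount: 10, timestamp: 9500 },
]);
console.log(pivot.get("user1")?.get("itemA")); // 75

In the study itself this lived in database queries, but the same grouping logic applies whether it runs in SQL or in analysis code.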

We also conducted a post-gameplay focus group to gather collective feedback from all the participants in the gaming session. Focus group questions were crafted based on the participants' purchase metrics gathered during the gameplay sessions and targeted the primary research goal of understanding player spending habits. For example, we could ask questions about specific participant behavior, such as why User 2 (see Figure 2) did not purchase Item D when all other participants had. Metrics collected during gameplay were used to structure the focus group discussion and helped us identify which items the spectator-players felt were the most impactful, as we could see and ask them very specifically about their purchases. Our MIGS 2017 presentation (listed in the further reading section below) provides more evaluation examples.

This article aimed to present design opportunities for the development and evaluation of novel games aimed at delivering interactive spectator-player experiences. This new category of games will need to effectively engage players, spectators, and spectator-players to fulfill its potential, providing both enjoyable gameplay and compelling entertainment. The mechanics discussed may be applied outside of the games industry, having wider applications in the field of interactive entertainment, such as interactive television programming and sports broadcasting.

[greybox]

Acknowledgment

This article is motivated by recent projects at UXR Lab where we contributed to two under-development game IPs. I would like to thank and acknowledge the researchers who were involved with these projects: James Robb, Samantha Stahlke, Thomas Galati, Naeem Moosajee, Nour Halabi, and Atiya Nova. This article summarizes the key lessons we learned from these projects. For more information, please refer to the further readings below.

Further Reading:

  • Pejman Mirza-Babaei and Samantha Stahlke. Designing and Evaluating Spectator Experiences. Presentation at the Montreal International Game Summit (MIGS), December 2017. [Slides] [Video]
  • Samantha Stahlke, James Robb, and Pejman Mirza-Babaei. The Fall of the Fourth Wall: Designing and Evaluating Interactive Spectator Experiences. International Journal of Gaming and Computer-Mediated Simulations (IJGCMS).
  • Pejman Mirza-Babaei and Thomas Galati. Affordable and Data-Driven User Research for Indie Studios. In Games User Research, Anders Drachen, Pejman Mirza-Babaei, and Lennart E. Nacke (Eds.). Oxford University Press (2018).
  • Anders Drachen, Pejman Mirza-Babaei, and Lennart E. Nacke (Eds.). Games User Research. Oxford University Press (2018).

[/greybox]

 

 

Big Behavioral Data: The Key to Eliminating User Frustration

The progression of UX research methods has led us closer and closer to getting the real, genuine user experience. User testing improved on the methodology of focus groups, trading discussion for observation and putting usability at the center. Remote user testing technology has removed much of the artificiality of lab testing, allowing users to re-create more representative experiences from their own homes on their own devices.

What’s the next step? How do we get even closer to understanding the user experience on our products? Big data may hold the answer.

Cloud-Based UX Tools and the Problem of Scale

Cloud-based online tools for doing UX research have taken a small first step in the right direction: Remote user testing companies like ours, for example, offload much of the specialized work of conducting usability studies and enable companies to regularly collect usability data.

However, these tools tend to operate at the small scale of 5–15 participants. Forays into crowdsourcing or big data methods have been rare, and often accessible only to enterprise firms with huge budgets and well-staffed UX departments.

Furthermore, the centrality of qualitative observation to UX work means that large datasets inherently have a high barrier to analysis. Video minutes add up fast; just 10 user tests could take five or more hours to watch (shown conceptually in Figure 1).

Graph showing a conceptual straight line comparing user video times to duration of other activities
Figure 1. Video data from user testing adds up quickly.

Some tools provide quantitative data or mixed qualitative/quantitative data like heat maps, clickstreams, and eye tracking, but without heavy-duty qualitative/descriptive data like user videos. All these are still “what” tools, not “why” tools. On their own, they do not show why users do what they do. They cannot expose the emotions and stories of the real user experience.

The Right Big Data for the Job

Which data is the big data that can benefit the UX researcher? Which big data shows “why,” and can be analyzed in a timely and useful way?

There is a treasure trove right under our noses: behavioral data.

Users’ behaviors on a website form the core of the user experience. Currently, the tools we use to track and understand website usage (like Google Analytics) use the pageview as the basic unit of a user session. For users, though, pages are generally not the building blocks of their experience.

Interactions—clicks, taps, scrolls, zooms, mouse movements, keyboard input, navigation—are what make the experience. Interactions provoke emotions in the user. An obvious, pleasing, smooth interaction provokes joy; a confusing, complex one causes frustration or anxiety. The accumulation of a user's emotional responses to all these little moments is a chief factor in how the user will ultimately feel about your website.

By observing the way users interact with a website, or any user interface, we can define their behavior and tell the story of their experience. Distinct interaction events (for example, JavaScript events) tell a whole story of user behaviors that can be mined for insights and patterns. All it takes is to collect the usage data from every single session (say, all the JavaScript events from all your website sessions, shown in Figure 2) and see what behaviors users engage in as they use a product or service.

A log showing a string of timestamped clicks, scrolls, and keypresses, pulled from a user’s session on a website
Figure 2. Analyzing interaction events from a JavaScript log can reveal details of the user's behavior.
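As a minimal sketch of the kind of client-side capture that produces a log like the one in Figure 2, the snippet below records timestamped interaction events in the browser and ships them in batches; the "/collect" endpoint and the batch size are placeholders, not a specific product's API.

interface InteractionEvent {
  type: string;      // "click", "scroll", "keydown", ...
  timestamp: number; // ms since page load
  target?: string;   // rough identifier of the element involved
}

const buffer: InteractionEvent[] = [];

function record(type: string, target?: string): void {
  buffer.push({ type, timestamp: performance.now(), target });
  if (buffer.length >= 50) flush(); // batch events to limit network chatter
}

function flush(): void {
  if (buffer.length === 0) return;
  // sendBeacon survives page unloads better than a normal request
  navigator.sendBeacon("/collect", JSON.stringify(buffer.splice(0)));
}

document.addEventListener("click", (e) =>
  record("click", (e.target as HTMLElement | null)?.tagName)
);
document.addEventListener("scroll", () => record("scroll"), { passive: true });
document.addEventListener("keydown", () => record("keydown"));
window.addEventListener("pagehide", flush);

Session-replay tools capture far more detail (DOM snapshots, mouse coordinates, and so on), but even a bare log like this supports the pattern detection described below.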

The great thing about behavioral data is that it tells both qualitative and quantitative stories. Because of the broad similarity of the types of behaviors people engage in, it’s possible to look at behavioral trends on a huge scale—hundreds or thousands of participants—and analyze what these trends mean.

If we can generalize patterns of behavior and then identify them in the usage data, it’s possible to learn more about how people really use our websites than can ever be derived from Google Analytics.

A good example of a generalized behavioral pattern is the “rage click,” in which users click rapidly and repeatedly on an unresponsive element (more on rage clicks later). By recognizing rage clicks as a recurring behavioral pattern and then identifying the interaction sequences that mark rage clicking, the behavior can be observed on a large scale.

That’s the big data that will improve the work of the UX professional—the data that answers the question, “What exactly are people doing on our websites?”

What Does Big Behavioral Data Look Like?

By capturing user session data, for example with JavaScript events, we can essentially rebuild each session as a video, indexed by behaviors including clicks, scrolls, navigations, and more. Thus, every session can be available to review as video content or as a log of interactions, shown in Figure 3.

A screenshot of a library of user sessions with technical information, time stamps, duration of the video and a link to play it.
Figure 3. Session-capture technology can collect behavioral data and reconstruct user sessions to put every user’s experience at the researcher’s fingertips.

While this means the UX researcher faces a mountain of time-consuming video, it is indexed to the smallest detail and thus highly searchable. Behaviors can also be algorithmically detected so that the researcher knows exactly which sessions to watch and where.

So, for example, after capturing thousands of user sessions the researcher might identify the sessions containing rage clicks, and watch only those; or, if analytics have shown that a problem exists on page Y, he might watch only sessions that included that page to look for the issue.
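A minimal sketch of that kind of filtering, assuming each captured session has already been tagged by behavior detectors; the Session shape and tag names are assumptions for illustration.

interface Session {
  id: string;
  pagesVisited: string[];
  behaviors: string[]; // tags produced by pattern detectors, e.g. "rage-click"
}

// Return only the sessions worth watching: those showing a given behavior,
// optionally restricted to sessions that touched a particular page.
function sessionsToReview(sessions: Session[], behavior: string, page?: string): Session[] {
  return sessions.filter(
    (s) =>
      s.behaviors.includes(behavior) &&
      (page === undefined || s.pagesVisited.includes(page))
  );
}

// Example usage: rage-click sessions that included the pricing page.
const toWatch = sessionsToReview(
  [
    { id: "s1", pagesVisited: ["/", "/pricing"], behaviors: ["rage-click"] },
    { id: "s2", pagesVisited: ["/features"], behaviors: [] },
  ],
  "rage-click",
  "/pricing"
);
console.log(toWatch.map((s) => s.id)); // ["s1"]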

5 Things to Learn By Capturing User Behavior Data

The challenge with big data is finding a useful, efficient way to harness it and learn from it. Here are five things we can do by collecting big behavioral data from user sessions:

  1. Better understand how people are really using a website
  2. Identify common behavioral patterns and extrapolate UX and usability issues
  3. Eliminate causes of user frustration quickly
  4. Prevent conversion-killing issues from persisting unnoticed
  5. Drive the agenda for deeper, targeted research studies that reflect actual usability needs

Each of these furthers the goal of the UX professional in building and maintaining a website that is easy to use, forgiving of errors, and responsive to users' needs.

1. Understanding actual website usage

How is this kind of data different from something like Google Analytics? Web analytics tools show where people go, what paths they follow, and which pages they landed on and exited from, but they don't show you the experience.

Experiences are composed of interactions, not a series of pages. The way the users interact with what’s on each page will define their experience, but that kind of behavioral data is below the threshold of what Google Analytics will show you (see Figure 4).

A chart comparing 3 types of data (Google Analytics, big behavioral data, and usability testing) on a variety of characteristics: the basic measurement unit, scale, type of data, human- or business-centric focus, and the answers each provides.
Figure 4. Big behavioral data shares similarities with both usability testing and Google Analytics, but has advantages over both in some areas.

Big behavioral data, like usability testing, can show you qualitative insights about usability issues that go below the surface of page views and conversion flows: ways that people use pages or tools differently than you expect, and ways they react or respond to UI elements differently than you imagine.

Another benefit big behavioral data shares with usability testing is the ability to watch complete sessions as video, from beginning to end. This helps the UX researcher process the user experience in human terms—as an individual’s story. It is also useful for showing stakeholders potential pain points to persuade them of the wisdom of particular design proposals.

Big behavioral data is also different from usability testing, however, because:

  • It’s taken directly from actual website usage, and therefore real.
    The artificial sheen of user testing, which attempts to replicate a genuine user experience, is removed completely. Even remote tools, where users test on their own devices from their own homes or workplaces, do not fully achieve this.
  • You have many more sessions to look at.
    User testing studies typically don't exceed the range of 5–50 tests. With a library of every single user session in front of you, you can make reliable, statistically sound UX judgments based on thousands of data points.

2. Identify common behavioral patterns, extrapolate UX issues

The ways people interact with online interfaces obviously vary from person to person (for example, "active" mouse users who drag the cursor all around the page, tracing every action and exploring every corner, versus "passive" mouse users who tend to leave the cursor idle while they explore with their eyes). Yet there are many identifiable behavior patterns that all or most people engage in.

This fact of shared behaviors across a broad spectrum of users is the most important key to interpreting big behavioral data and then using large-scale analysis to improve UX research methods. Two steps can help us harness it:

  1. Single out behaviors that suggest problems, frustration, or confusion.
  2. Figure out how they can be machine-identified from a massive dataset.

Rage click

Take, for example, the rage click.

Even if you've never heard it called this before, you know the behavior. You click a link, a button, an image… and nothing happens. You give it a second. Still nothing? You click again, then again, then again and again in a rapid barrage.

This largely subconscious response is an example of a nearly universal user behavior that clearly indicates the user's frustration and anxiety.

It can also be easily identified with a few parameters: all instances of, say, 3 or more clicks in a span of perhaps 1 second or less. Whatever those numbers are, they can be tested and optimized for accuracy, until a pattern-matching tool can reliably categorize all such behaviors as rage clicks.
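One way to operationalize those parameters is sketched below: clicks on the same element are grouped whenever consecutive clicks fall within a short gap, and any group of three or more is flagged. The 500 ms gap (which keeps three clicks inside roughly a one-second span) is a starting point to tune, not a definitive threshold.

interface ClickEvent {
  timestamp: number; // ms
  target: string;    // identifier of the clicked element
}

// Group same-target clicks whose consecutive gaps are at most gapMs,
// and flag groups with at least minClicks as rage clicks.
function findRageClicks(clicks: ClickEvent[], minClicks = 3, gapMs = 500): ClickEvent[][] {
  const sorted = [...clicks].sort((a, b) => a.timestamp - b.timestamp);
  const bursts: ClickEvent[][] = [];
  let run: ClickEvent[] = [];

  for (const click of sorted) {
    const last = run[run.length - 1];
    if (last && click.target === last.target && click.timestamp - last.timestamp <= gapMs) {
      run.push(click); // still hammering the same element
    } else {
      if (run.length >= minClicks) bursts.push(run);
      run = [click];
    }
  }
  if (run.length >= minClicks) bursts.push(run);
  return bursts;
}

// Example usage
const bursts = findRageClicks([
  { timestamp: 0, target: "#plan-feature" },
  { timestamp: 300, target: "#plan-feature" },
  { timestamp: 550, target: "#plan-feature" },
  { timestamp: 4000, target: "#nav-home" },
]);
console.log(bursts.length); // 1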

What does a pattern of rage clicking tell you? Either you have a bug or a usability issue. If your users are trying to click an element that’s not actually clickable, you’re likely facing inconsistency in your visual language that has led people to misinterpret that element. It may also indicate that a page or interface is missing something that users are looking for, causing them to click on anything out of desperation.

More behaviors

Other behaviors can be identified by similar means.

Mouse-related behaviors (like random/wild patterns, hesitation patterns, and reading patterns) can indicate impatience, high difficulty level or cognitive load, or high concentration levels.

Scrolling behaviors, such as random scrolling (rapid up and down), can hold insights about your site's information scent. Are users scrolling through lots of content they find irrelevant to their purposes? Are they struggling to find the right information or calls to action?

Telltale navigation behaviors include backtracking and the related pattern of pogo-sticking (repetitive back-and-forth navigations between one “hub” page and many “spokes”).

Navigation actions can be interpreted, according to some behavioral models, as either forward movement or impasse. In other words, did the user get closer to a goal or take a wrong turn? When someone backtracks, what signs pointed them in that wrong direction? (Or, is it just an exploration style? Does your information architecture force excessive clicking?)
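As a rough sketch of how such a navigation pattern might be machine-identified, the function below counts hub-spoke-hub round trips in a session's ordered pageview list; the threshold of three round trips is an assumption to tune against real sessions.

// Flag hub pages that a user repeatedly bounces back to (pogo-sticking).
function findPogoSticking(pageviews: string[], minRoundTrips = 3): string[] {
  const roundTrips = new Map<string, number>(); // hub URL -> round-trip count
  for (let i = 0; i + 2 < pageviews.length; i++) {
    const hub = pageviews[i];
    const spoke = pageviews[i + 1];
    const back = pageviews[i + 2];
    if (hub === back && hub !== spoke) {
      roundTrips.set(hub, (roundTrips.get(hub) ?? 0) + 1);
    }
  }
  return [...roundTrips.entries()]
    .filter(([, count]) => count >= minRoundTrips)
    .map(([hub]) => hub);
}

// Example usage
console.log(
  findPogoSticking([
    "/search", "/item/1", "/search", "/item/2", "/search", "/item/3", "/search",
  ])
); // ["/search"]

Whether a flagged hub reflects a genuine impasse or just an exploration style is exactly the kind of question the captured session videos can then answer.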

All these behavior patterns, and more, can be defined with quantifiable parameters and used to learn more about the user experience.

3. Eliminate bad UX quickly

Once parameters are in place (even better if they are self-improving with artificial intelligence), you can immediately see each instance of these user frustrations and watch the session to understand why it happened.

Then, you can act quickly to either fix what caused the user frustration, or launch into deeper research on the issues discovered, depending on what the situation demands. In many cases, other types of research (like user testing) will be necessary, or at least helpful, to add understanding and clarity to the issue.

4. Prevent conversion-killers from hiding in the shadows

UX issues are always hiding among the conversion pathways of our websites—nothing is ever optimized to perfection. But silent conversion killers like a broken button or an information disconnect can be lurking without our knowledge.

With session-capture technology, we can be constantly watching for these problems so they don’t persist for months and bring down conversions, signups, sales—whatever goal you have for your website.

In the words of Disney’s Pocahontas: “You’ll learn things you never knew you never knew.”

5. Drive smarter, targeted research that reflects real needs

One of the biggest issues with usability testing is that people don’t know what to test. So, they test the new feature; the old pages that are up for a redesign; the pet project. What about the rest of the site? How do you know where the big bad issues are?

Big behavioral data methods look at the whole website, collecting all the usage data to show you exactly where the issues actually are. They point you in the right direction.

Here’s an example from our own experience. We regularly run tests on the new prototypes and flows we create, and on the old ones we want to redo. However, when we implemented our own session-capture technology on our website, we discovered instances of rage clicking on an important page we hadn’t been testing: our pricing page. It seemed some users thought bolded feature names under our plan types were clickable.

Clearly this represented a usability issue; users were misinterpreting our visual communication, believing that the bold font of these elements implied they were clickable.

We ran user tests to get a better understanding, and from this feedback we discovered a more fundamental disconnect on our marketing pages between information about our features and our plans.

The plans page lacked sufficient descriptions of our features for users to adequately appreciate what they were getting. Meanwhile, the features page lacked the necessary integration with information about our plans to tie together these two important pieces of information.

We also looked at navigation data on Google Analytics and noticed some patterns that confirmed this problem. The flow of traffic from the plans page to the features page was actually 29.7% higher than the flow of traffic from features to plans. As plans is one of our most important conversion pages, this meant that a lot of users were essentially going backwards in the flow due to this information disconnect.

This kicked off a re-evaluation of how we communicated about our features and plans to new users, and we are currently renovating the way we approach this information to improve users’ understanding and increase conversions.

Relying on user behavioral data to inform UX research and strategy is still relatively new, yet it is a logical step toward understanding the real experiences of people using our designs.

Becoming a UX Researcher in the Games Industry

Why User Experience Research Is Growing as a Career in Games

The games industry is always transforming due to changes in player demographics and business models. The introduction of new technologies compels developers to create new kinds of game interactions and experiences. For example, as new publishing and distribution models like Free-to-Play (F2P) and Games as a Service (GaaS) gain popularity, the player experience and long-term engagement have become increasingly crucial factors for achieving commercial success. Given this constant transformation, the value of User Experience researchers (UXR) in the games industry is more prominent than ever, creating exciting new career prospects.

This shift toward establishing UX research-focused roles is noticeable in the growing number of industry organizations and events focusing on this domain. For instance, in 2009, the International Game Developers Association (IGDA) established a dedicated Special Interest Group (SIG) for Games User Research (GUR), which was later renamed Games Research and User Experience. The SIG runs annual summits, mentoring programs, and hosts an active Discord Community. The Game Developers Conference (GDC) responded to this shift by introducing a conference track dedicated to Games UX in 2017.

Becoming a UX Researcher in the games industry can be an attractive career path. It offers a unique opportunity to merge one’s passion for gaming and research into a profession that can have considerable impact. Games UX Researchers advocate for players and improve their gameplay experiences based on research data and insights. They work closely with game developers during the development process to ensure that the final product creates a fun and engaging experience for players and meets designers’ experience objectives. This role allows UX Researchers to witness how their research contributions and recommendations translate into better player experiences, which can be incredibly satisfying. A notable sense of fulfillment comes from contributing to the success of games that many players enjoy.

The games industry is at the forefront of innovation and creativity, where technological advancements and artistic expressions are combined to create new experiences. Games UX Researchers are well-positioned to take advantage of this by constantly adapting their methods and techniques to make the most of new opportunities and match the evolving gaming industry. This adaptability can keep the job fresh and exciting, providing new challenges and fostering a continuous learning environment. This adaptability also contributes to the intellectual aspect of the career as it is founded in understanding user behavior, psychology, interaction design, and product development. This combination can further offer personal intellectual satisfaction while building demand for this expertise in the job market.

Additionally, diverse career opportunities within the gaming industry add to the appeal of this career path. Games UX Researchers can choose to work with game development studios, publishers, firms specializing in UXR, or as freelance consultants offering their services to multiple companies. This diversity means they can follow a career path that best matches their personal and professional goals. Moreover, given that research skills are highly transferable, Games UX Researchers can relatively easily change domains and apply their knowledge to other fields of interactive systems, further broadening their career prospects.

The Skills Required for Games UXR

Games UXR allows people to combine a professional research skill set with a field they feel passionate about. One of the most crucial skills that games UX Researchers regularly demonstrate is understanding how to design, run, and analyze reliable user research studies that address the questions that emerge when developing games.

Some research objectives that game development needs to answer include discovering whether experiences are fun, how to optimize that fun, and how to teach game mechanics effectively. Table 1 highlights key differences between games and other interactive systems that impact games' UXR objectives and approaches (for more details, you can read Getting Ahead of the Game: Challenges and Methods in Games User Research, a previous UXPA article on this topic).

Table 1. Differences between games and other applications. (Adapted from Pagulayan’s “User-Centered Design in Games” in Human-Computer Interaction Handbook (2003).)

  • Process vs. results: The purpose of gaming is usually in the process of playing, not in the final result.
  • Imposing constraints vs. removing or structuring constraints: Game designers intentionally embed constraints into the game loop, but productivity apps aim to minimize constraints.
  • Defining goals vs. importing goals: Games (or gamers) usually define their own goals or how to reach a game's goal. However, in productivity applications, the goals are usually defined by external factors.
  • Few alternatives vs. many alternatives: Games are encouraged to support alternative choices to reach the overall goal, whereas choices are usually limited in productivity applications.
  • Functionality vs. mood: Productivity applications are built around functionality, but games set out to create mood (for example, using sound or music to set a tone).

Game development offers a unique environment to deploy a research skillset, so researchers are required to understand the medium of games. This is not only about knowing how to have conversations with players but also about understanding the game development process and disciplines involved. The process of making games has developed separately from other software development approaches, and many of the product processes, terms, and disciplines will be unfamiliar to someone from another industry. Specifically, a researcher interacts with producers, game designers, insight professionals, and UX designers. Understanding how games are made and who makes them is necessary to be an effective games UX Researcher.

Finally, communication is a core skill for games UX Researchers. Studies uncover opportunities and problems that stretch across the whole company, so a multidisciplinary effort with designers, producers, and developers involving the whole team is required to prioritize and fix issues. Communication skills, confidence in giving presentations, and relationship-building skills with colleagues are pivotal. Ensuring that research findings can be understood—that findings feel relevant and important to a broad range of colleagues—is essential for success.

Getting into the Games Industry

Because it's an industry people feel passionate about, available roles can be infrequent, and there is a lot of competition for games UX research roles when they open. This means that candidates must demonstrate their mastery of the core skills to stand out.

To develop these core research skills, many people come into the field with post-graduate study (master's and PhD levels). In the IGDA's 2019/2022 survey, 24% of people working in games UXR had a PhD, most commonly from psychology, HCI, or neuroscience backgrounds. However, academia isn't the only way to gain and demonstrate study design experience. Hiring managers are often open to applicants who already work in UXR in other fields and have experience in qualitative and quantitative research. Working with hobbyist game developers to apply user research methods to games can also help build confidence that you're ready to use your skills in this domain.

In smaller teams without a dedicated UXR role, designers, producers, and quality assurance managers often run studies referred to as playtests. With support from a mentor, courses, or the wider research community, one can get the experience necessary in designing and running studies at a professional level. 

For people entering the games industry, hiring managers will want evidence that candidates understand how games are made and that they can have constructive conversations with colleagues from other disciplines. This can be a particular challenge when trying to join the industry at a senior level, because candidates are expected to be able to represent the games and UX disciplines immediately. Luckily, some great books and talks introduce game design and development and will help aspiring games UX Researchers understand the domain. A Playful Production Process, by Richard Lemarchand, the former lead designer of Uncharted™, describes the approach behind those hit games and how UX research fits in. The Game Designer's Playbook, by Samantha Stahlke and Pejman Mirza-Babaei, also gives helpful context on designing fun interactions for games.

As mentioned above, games UXR has an active community, including conferences and newsletters. Finding a network on social media or Discord™ and following industry discussions may help make integration into the games industry easier. The IGDA’s Games Research and UX Discord community is a great place to get started alongside curated lists of games researchers on X (Twitter).

The Changing Role of a UXR

Similar to other careers, games UXR offers the potential to develop your skillset as you advance. At some companies, the most junior roles are focused on moderation and execution of studies. They can be the team member who spends the most time face to face with players while asking questions, administering surveys, and observing studies. This is frequently done in partnership with a more experienced researcher who has designed the study. This role can create opportunities to develop study designs and analysis skills and understand how to apply them in the industry.

Mid-level researchers are typically trusted with the end-to-end development, execution, and debriefing of a study, including working with a development team to confirm a study’s focus, deploying a range of research methods to gather reliable data, and drawing that together into a clear and compelling debrief.

As a researcher progresses in seniority, relationship building and proactively advocating for the discipline becomes important. Senior researchers are expected to help teams unfamiliar with user research determine what’s important to test and how. Senior researchers can often expect to be the sole representative of their discipline in a team, so persuasion, communication, and influence become increasingly important. It can be a very rewarding career for people who are interested in using their research skills to understand people (not just users, but also their colleagues).

As a researcher develops further, they can often decide whether to focus on coordination and people management of other researchers or to take a principal route that focuses on deep skill development and becoming the go-to person for a specific method or focus area. Focus areas include accessibility in games, player trust and safety, particularly in online multiplayer games, platform-focused areas such as virtual and mixed reality (VR/MR), or techniques such as reliably measuring attitudes or retention over time.

A Career on Hard Mode?

Despite the rewards, game development can be a demanding career, and it’s not uncommon for people to leave the games industry after five to ten years.

Depending on the team you're working with, some roles can be repetitive: in a single studio, a researcher may work on the same title for many years with similar study designs. A lot of UXR for games is also focused on evaluative methods, such as usability testing or gathering ratings, which can offer limited opportunities to apply a wider research skill set.

As in many industries, games UXR lacks recognition at senior levels, and roles beyond the director level are currently uncommon. Some people move to adjacent roles, such as design or production, to continue developing their careers.

Game development is also known as an unstable environment, with layoffs common when games fail to hit critical success (or even after a hit game, due to poor planning!). Unless a researcher is based at a large established publisher, they may have to change roles unexpectedly or relocate for a new position, which can be difficult to balance with care responsibilities or a family. Many game professionals decide to seek more stability later in their career and eventually leave games.

Because the games industry runs on passion, wages can often be lower than in equivalent roles in other industries such as tech and finance. This can become increasingly challenging in high cost-of-living areas. Ultimately, working in games isn't always fun. Still, it is an industry that players (and colleagues) have a deep passion for, and practitioners are often deeply invested in their craft and in creating a positive experience for players. For many people, especially earlier in their career, that trade-off is still attractive!

The Future Trends in Games UXR

As discussed, technological and business advances have constantly reshaped the games industry, allowing developers to make novel forms of interactions and experiences. These advances also pose challenges and opportunities for games UXR; for example, researchers may be the first to evaluate new interactive experiences and their social implications to further our understanding of humans and play. In this section, we will highlight some of the technological trends currently shaping the future of UXR in games.

Development of Novel Interaction Methods

We've come a long way from gameplay experiences focused on a single player using standard devices like a mouse, keyboard, and screen. Nowadays, online gaming platforms and streaming services have made large-scale interactions between massive groups of players commonplace. Technologies such as virtual and mixed reality (VR/MR) headsets and the widespread use of augmented reality (AR) on smartphones enable compelling, immersive experiences accessible to many people. Furthermore, cutting-edge technologies are pushing the boundaries of interaction design by exploring sensory channels beyond sight and hearing, including touch and smell. This expansion is opening new possibilities for games.

Advances in User Data Collection

As business models trend away from traditional boxed releases toward games that develop long-term relationships with players, game developers and UX Researchers are collecting more data related to player behaviors, preferences, and game performance metrics. This data allows for creating highly detailed profiles of individual players, including factors like purchase history, play session durations, and in-game actions such as combat style or time taken to solve puzzles. It also enables UX analysis of larger player populations and helps answer high-value business questions, such as how to optimize player retention and in-game item sales.

Automation and Artificial Intelligence (AI)

Game development can be a very time-sensitive environment, and development has traditionally relied heavily on skilled human labor for tasks like game creation and UX evaluation. Procedural content generation (PCG), which uses automation to assist in generating game content, has long been an established practice in the industry. Nowadays, content creation is evolving further as AI technology becomes more sophisticated, and Games UX Researchers are exploring whether AI can help with game testing and data analysis.

Opportunities to Learn More and Develop

The demand for games UXR is rising as research plays a pivotal role in helping developers achieve their player experience goals. Yet many game studios, particularly smaller teams, lack dedicated UXR personnel, or their existing research staff cannot meet the research demands. Maturing and scaling research in the gaming industry therefore continues to be important.

While expanding a research team may not always be feasible due to budget constraints and the prioritization of other development talent, there's an alternative solution: democratizing research. This approach involves empowering and educating non-research team members to conduct research effectively. However, it needs support, such as covering educational costs, incentivizing learning initiatives, providing access to learning resources, and offering mentorship. It's important to note that there are associated risks, such as maintaining research validity and concerns about the impact on established research teams. Deciding which projects should involve the research team and which can be handled by non-researchers (given the necessary resources) is essential for success.

Further reading might include the Games User Research book, which offers an extensive collection of insights and best practices from over a dozen games UXR experts. The book covers topics such as planning user research, obtaining actionable insights from research, and determining the most suitable methods for various scenarios. Steve Bromley’s book How to Be a Games User Researcher applies lessons from running the International Game Developer Association’s Games UX research mentoring scheme to help people start their career. It covers research methods, game development, and career tips. His website, https://gamesuserresearch.com/, offers further career guidance and help, including a free book sharing the secrets of games research hiring managers.

Building the Next-Gen UX Team: Strategies for Cultivating Generalists

Broad knowledge of various UX skills vs. deep knowledge in a single discipline
Figure 1. A “T-shaped” person needs to have depth and expertise in a single discipline before branching out to a broader range of user experience skills.

If you entered the UX industry in the mid to late ‘90s, you probably did a little bit of everything, which helped you get through some of the lean times. However, as UX eventually moved into the enterprise, UX generalists began to disappear and you had to choose a side: I’m an information architect; I’m a user researcher; I’m an interaction designer; I’m a developer, etc.

But over the past few years we’ve seen the success stories of how small groups of designers and developers have quickly launched complicated apps and sites. At the same time, the field of user experience has evolved to the point that designing an experience is not tied to a specific discipline. User experience professionals all share a common skill—the ability to design experiences—and the common practice of user-centered design. Design legend Khoi Vinh refers to this as a “move back to generalists,” but I prefer the term “T-shaped people” coined by Tim Brown, the CEO of IDEO (see Figure 1).

A T-shaped person has a strong, deep, vertical skillset in a single discipline, as well as a broad, shallow set of skills in related disciplines. A good example of this is an expert visual designer who can quickly build a prototype in HTML and has a budding appreciation for the basics of interaction design. When you have a designer like that, you can find opportunities to deploy them in interesting ways beyond their primary area of expertise: as a rapid prototyper for testing support, as a liaison to your front-end development group, or as a sole practitioner on small site updates.

The rise of agile development is making someone like my fictional visual designer highly valued. Agile teams aren’t typically set up for multiple UX designers (to be honest, they don’t always seem to be set up for UX at all, but that’s a different article). Multiple UX practitioners on an agile team add complexity and headcount. While there are many enterprises that will support a full UX team on an agile project, you may find that if your UXers are specialists, they will be underutilized from sprint to sprint. In many organizations with agile teams, using a T-shaped person who can reach out to a UX specialist when necessary is more sustainable.

Developing a Cross-Training Program

At Walmart.com, we turned to T-shaped people to address a critical shortage of interaction designers. Too often, we just threw these people into the fray and tried to support them as best we could. But we have also taken a more deliberate approach. Several years ago, we developed an information architect training program, and the graduates became some of our most highly-valued employees. More recently, we’ve begun working on a program that guides the UX professional on the journey from specialist to T-shaped person.

To be effective, T-shaped training should emphasize hands-on experience—think 70 percent hands-on and 30 percent exposure and education. We’ve approached both ends at the same time. We’ve asked writers and designers to do interaction design on projects solely based on need, and we’ve rolled out more formal training and mentoring. Our cross-training efforts are still a work in progress, but here are five steps that you should consider if you launch a cross-training program of your own.

STEP 1: Find Your Teachers

Identify key discipline “masters” within your organization who also have an aptitude for teaching and delegating. This second part is important. We all know people who are great at a particular discipline but have absolutely no facility for delegation and are much happier just doing the work themselves. Those people may not make the best teachers because if they don’t have the patience to delegate, they won’t have the patience to keep explaining a concept until it finally sinks in.

Look for the person who is a great designer and is great at leading teams. If people want to work with that person and fight to get on a project with them, chances are they’ll make a good teacher. Look for the designers who clearly feel empathy for the people they design for—they’ll show the same sort of empathy for their students and will find a way to pass on knowledge.

While you can rely on these masters for initial, structured training (for instance, providing a four-hour workshop on the basics of typography), most of your generalists’ growth will come from project work. However, it’s not enough to turn your budding T-shaped people loose on a project and expect them to figure it out. They’ll need careful mentoring and clear goals. You’ll need to spell out how to move from novice to intermediate on a particular skill, and provide concrete examples of what intermediate looks like. Your master practitioners will be able to help with this through mentoring and work reviews.

STEP 2: Pick the Right People 

Our job as leaders and managers is to ensure that we don’t develop “jack of all trades, master of none” UXers. There is still room and need for deep expertise, and your candidate should be deep in one discipline before you commit time and energy to making them more T-shaped.

Good candidates need to be curious self-starters. While they probably learned their primary discipline in school or through a formal training program, many of their new skills will be acquired either on the job or through self-directed training. They also need to express interest. There’s no point in forcing someone who is perfectly happy practicing their own discipline down a path they’re not interested in taking; you will just make them feel unsuccessful. Look for the person who is always trying out a new tool or sending out links to interesting articles to the UX group.

Humility is also a useful quality in candidates. They need to know when they have reached the boundaries of their T-shapeness. In other words, they need to know what they don't know. The last thing you want is to have someone get into an awkward situation because they didn't know they were in over their head.

STEP 3: Define a Structure

Simply training people isn’t enough. You need to connect the training to their careers and their personal growth. Does your company have an independent development plan program in place? If it does, piggyback on that and use it to further your employees’ own career goals with cross-discipline training. If it doesn’t, find a way of measuring and tracking progress.

For example, pick a skill that is associated with a discipline. For the discipline of interaction design, that might be the ability to capture business processes in flows. Work with the employee wanting to acquire interaction design skills by rating them on the aforementioned skill on a scale from one to ten. Then, figure out what it would take to bump that skill up a level. Do you ask the employee to flow out some existing processes? Do you have her work on the flow for her next project? Then, when she gets to the level you specified, think about what it would take to move her up one more level.

Do you have a job description that’s aligned with the new skill(s) being taught? If you do, and if it’s good, you probably have the material for the exercise above. If you don’t, or if it’s bad, take some time to think about what a beginner, intermediate, and advanced practitioner in a discipline can do. Also, do you have criteria for job switching? If you don’t, start to think about what will happen when one of your well-trained writers wants to become an IA officially.

You don’t have to get all of this in place before you start your training program, but when employees begin asking career development questions, you’ll want to have some part of a structured environment in place.

STEP 4: Follow the Med School Model

Your employees will need to see one, then do one, and eventually they’ll be able to teach one.

But when it comes to “doing one,” you need to make sure that your employee is working in a place where failure is not catastrophic. Failing is good, and failing fast is better, but only if failure doesn’t destroy your employee’s confidence.

Recoverable failure is possible if the failure isn’t public, is not embarrassing, and if you choose a task that is not tightly coupled to critical project work. The obvious place to start is an internal initiative such as an intranet or a non-critical business tool, but a project with forgiving timelines and friendly stakeholders is also a good candidate. Regardless, pick something that is challenging.

STEP 5: Repeat

Repeat the steps, but recognize the limitations of cross-training. You aren’t going to turn an interaction designer into an expert visual designer overnight, if ever. The cross-discipline skills will likely never be as strong as the core skill. When you need an expert, make sure that you go to the specialist.

Over time, you’ll notice that your cross-trained UXers can cover an incredible breadth of work and will become your most valuable employees. At that point, you can turn them loose to build their next generation.

But Should We Do This?

Or, more to the point, why would we do this? Obviously agile is here to stay and UX generalists are great for agile. Furthermore, generalist training is a great step on the leadership path, since at some point leaders will manage cross-functional teams and will need to speak the language of their disciplines.

So, with that in mind, developing T-shaped UXers is really in service to our people and sets them up for further success in their careers. At the same time, a team of generalists gives the UX leader greater flexibility in staffing. But, the fact of the matter is that we probably don’t have much choice. At some point, the industry is going to demand T-shaped people. We need to be prepared.

 

 

With the rise of agile development, the conditions are ripe for the return of the UX generalist. UX practice is not tied to a specific discipline: whether you are an information architect, user researcher, visual designer, or content strategist, everyone practices user-centered design. UX departments need strategies for cultivating generalists rather than specialists. As UX leaders, how do we do that?

The full article is available in English and Spanish.

 

애자일 개발이 대두되면서, 사용자 경험 제너럴리스트의 귀환 여건이 무르익고 있습니다. 사용자 경험의 실제는 특정한 한 분야와만 관련이 있는 것이 아니며, 정보구조설계자, 사용자 조사 전문가, 시각 디자이너, 콘텐츠 전략가들이 모두 사용자 중심 디자인을 다룹니다.  사용자 경험 부서는 스페셜리스트 대신에 제너럴리스트를 육성할 전략이 필요합니다. UX 리더로서 우리는 어떻게 제너럴리스트를 육성할 수 있을까요?

기사 전문을영어와 스페인어로 볼 수 있습니다

Com a ascensão do desenvolvimento ágil, as condições são ideais para o retorno do profissional em experiência do usuário generalista. A prática da experiência do usuário não é atrelada a uma disciplina específica: arquitetos da informação, pesquisadores de usuários, designers visuais e estrategistas de conteúdo são profissionais que praticam o projeto centrado no usuário. Os departamentos de experiência do usuário precisam de táticas para criar profissionais generalistas em vez de especialistas. Como nós,  líderes em experiência do usuário, podemos criar profissionais generalistas?

O artigo completo está disponível em inglês e espanholアジャイル開発が台頭するようになり、ユーザエクスペリエンスのジェネラリストが復帰するための状況が整ってきたといえる。ユーザエクスペリエンスは特定の分野に限られたプラクティスではなく、インフォメーションアーキテクチャ、ユーザ調査、ビジュアルデザイン、コンテンツ戦略などすべての分野においてユーザ中心設計が行われており、このため、ユーザエクスペリエンス部門はスペシャリストではなくジェネラリストを養成する方略を必要としている。このような状況で、UXのリーダーである私たちは、どうすればジェネラリストを養成することができるだろうか?

全文記事は英語とスペイン語で記載

 

Un amplio conocimiento de habilidades de experiencia de usuario versus un profundo entendimiento de una sola disciplina
Figura 1. Una persona con forma de “T” necesita un profundo conocimiento y experticia en una sola disciplina antes de abrirse a un rango más amplio de habilidades de experiencia de usuario.

Si usted entró en la industria de UX a mediados de los 90s, probablemente hizo un poco de todo, lo que le ayudó a sobrevivir en tiempos de escasez. Sin embargo, a medida que la UX se fue instalando en las empresas, los generalistas de experiencia de usuario comenzaron a desaparecer y usted tuvo que escoger entre ser un arquitecto de información, un investigador de experiencia de usuario, un desarrollador, etc.

En los últimos años hemos visto cómo pequeños grupos de diseñadores y desarrolladores han implementado rápidamente sitios y aplicaciones de gran complejidad con mucho éxito. Al mismo tiempo, el campo de la experiencia de usuario ha evolucionado al punto que el diseño de la experiencia no está atado a una disciplina específica. Todos los profesionales de experiencia de usuario comparten una habilidad común – la habilidad de desarrollar experiencias – y la práctica habitual de diseño centrado en el usuario. La leyenda del diseño, Khoi Vinh, se refiere a esto como “una vuelta a los generalistas”, pero yo prefiero el término de “las personas con forma de T” que acuñó Tim Brown, el CEO de IDEO (ver Figura 1).

Una persona con forma de T tiene un set de habilidades fuerte, profundo y vertical en una sola disciplina, así como otras habilidades más amplias y menos profundas en disciplinas relacionadas. Un buen ejemplo de esto es un diseñador visual experto que puede construir rápidamente un prototipo en HTML y además tiene conocimientos básicos de diseño de interacción. Cuando usted tiene un diseñador como ese, puede encontrar oportunidades de aprovecharlo de manera muy interesante, más allá de su área principal de experticia: como el encargado de hacer prototipos rápidos para el área de testeo, como el enlace para el grupo de desarrollo de interfaces, o como el encargado único de actualizar sitios pequeños.

Con el alza del desarrollo ágil, alguien como mi diseñador visual ficticio se vuelve altamente demandado. Los equipos ágiles no están típicamente organizados para tener muchos diseñadores de UX (para ser honesto, no siempre están estructurados para incluir UX, pero eso es un artículo diferente). Muchos encargados de UX en un equipo ágil suman complejidad y más cabezas. Si bien hay muchas empresas que cuentan con un equipo completo de UX en un proyecto ágil, se puede establecer que si esos diseñadores de experiencia de usuario son especialistas, serán subutilizados de proceso en proceso. En muchas organizaciones con equipos ágiles el utilizar personas con forma de T para que puedan recurrir a un especialista de experiencia de usuario cuando sea necesario, es más sostenible.

Developing a Cross-Training Program

At Walmart.com, we turned to T-shaped people to address a shortage of interaction designers. We often threw them into the fray and tried to help them along as best we could. We have now taken a more deliberate approach. A few years ago, we developed a training program for information architects. Graduates of that training became some of our most valued employees. Recently, we began working on a program that guides user experience professionals on the journey from specialist to T-shaped person.

To be effective, T-shaped training must emphasize hands-on experience – think 70 percent practical exercises and 30 percent lecture. We tackle both at once: we have asked writers and designers to do interaction design on an as-needed basis, and we have also established more formal training and mentoring. Our cross-training efforts are still a work in progress, but there are five steps you should consider when launching your own cross-training program.

STEP 1: Find Your Teachers

Identify the people in your organization who could serve as key “masters” of a discipline and who have the ability to teach and delegate. That second part is important. We all know people who are very good at a particular discipline but have no knack for delegating and are much happier working alone. They may not be the best teachers, because if they don’t have the patience to delegate, they probably don’t have the patience to explain a concept in depth either.

Look for someone who is both a great designer and good at leading teams. If people want to work on projects with that person, the odds are high that they will also be a good teacher. Look for designers who clearly feel empathy for the people they design for – they will show the same kind of empathy for their students and will find a way to pass their knowledge along.

While you may initially rely on these “masters” for structured, formal training (for example, a four-hour workshop on the basics of typography), your generalists’ growth will come from working on projects. That does not mean turning T-shaped people loose on a project to see what they do with it. They will need a careful mentoring program and clear goals. You will need to define what it takes to move from novice to intermediate in a particular skill, with concrete examples of what intermediate-level work looks like. The masters can help with this process through mentoring and reviews of the work.

STEP 2: Choose the Right People

Our job as leaders and managers is to make sure we do not end up with user experience designers who are jacks of all trades and masters of none. There is still a place and a need for deep expertise, and your candidate should have deep knowledge of one discipline before you invest time and energy in turning them into a T-shaped person. Good candidates are curious self-learners. While they probably learned their core discipline at a university or through a formal training program, many of their new skills will be acquired on the job and through self-directed learning. They need to show interest. You cannot force someone who is happy in their discipline down a path they are not interested in taking; that will only leave them feeling unsuccessful. Look for the person who is always trying out a new tool or sending the user experience group links to interesting articles they have found.

Humility is also an important quality in candidates. They must know when they have reached the limits of their T shape; in other words, they need to know what they don’t know. The last thing you want is to put someone in an awkward situation because they are in over their head.

STEP 3: Define a Structure

Training people is not enough. You need to connect your team’s training to their careers and personal growth. Does your company have an individual development plan? If it does, use it to broaden your employees’ professional horizons with cross-disciplinary training. If it does not, try to find a way to measure and track progress.

For example, pick a skill associated with a discipline. For the discipline of interaction design, it might be the ability to define business process flows. Work with the employee who wants to acquire interaction design skills, rating them on that skill on a scale of one to ten. Then find the best path for raising their level in that skill. Do you ask the employee to diagram existing processes? Do you have the employee create the flows for the company’s next project? When the employee reaches the level you specified, think about what they need in order to reach the next one.

Do you have a job description for the employee that aligns with the new skill they have learned? If you do, and it is a good one, you probably already have the material for the exercise above. If you do not, or it is a poor one, take the time to think about what skills a person needs at the novice, intermediate, and advanced levels. Do you have a process for changing jobs within the organization? If not, start thinking about what will happen when one of your well-trained writers wants to officially become an information architect.

You do not need all of this in place before starting your training program, but you will need it by the time your employees want to know about their career development options.

STEP 4: Follow the Medical School Model

Your employees will need to see one, do one, and eventually they will be able to teach one.
When it comes to “doing one,” make sure your employee is working somewhere where making mistakes is not a catastrophe. Failing is good, and the earlier the better, but only if the failure does not destroy your employee’s confidence.
Recovering from a mistake is possible only if the failure is not public, does not embarrass the employee, and you have chosen a task that is not tied to a project’s critical path. The obvious place to start is an internal initiative such as an intranet or a tool that is not business-critical; a project with forgiving deadlines and friendly clients is also a good candidate. Most important of all, though, is to pick something challenging.

STEP 5: Repeat

Repeat the steps, recognizing the limits of cross-training. You will not turn an interaction designer into a visual design expert overnight. Cross-disciplinary skills will never be as strong as the core skill. When you need an expert, make sure you go to the specialist.

Over time, you will find that your cross-trained user experience designers can cover a broad range of work and become your most valued employees. At that point, you can set them loose to start building the next generation.

But Why Should We Do This?

Or, more to the point, why bother? Obviously, agile methodologies are here to stay, and UX generalists thrive in agile environments. Generalist training is also a big step on the path to leadership, since leaders must manage multidisciplinary teams and speak the language of each of their disciplines.

So, with that in mind, training T-shaped user experience designers is really a service to our people, setting them up for success in their careers. At the same time, a team of generalists gives the UX leader great flexibility when staffing projects. But the truth is we probably will not have much of a choice. At some point in the future, the industry is going to demand T-shaped people. We need to be ready.

Building a UX Team: Change Is the Only Constant!

Formerly an analyst, I am now building the design team at Grameenphone Ltd., one of the most admired brands in Bangladesh and the largest mobile operator in the country. It all started with a phone call:

Caller: Hi, Moin, are you busy at this moment? Can you come over and meet me right now?

Me: Sure! On my way!!

Caller: Thanks for coming. We have a vacancy and your name has been recommended. Also, my experience working with you gives me confidence that you will do well. Want to explore something completely new?

Me: I can give it a try, but what is it?

Caller: The User Experience team is in need of a new leader!

Me: What!? Me? (A few seconds’ pause!) Okay! I will take the challenge.

And this is how it all started. I was given this amazing opportunity and jumped onto the ship. Now I am the proud leader of a small 5-person team of very talented UX professionals.

We hope to grow even bigger and better in the days to come. But no university in Bangladesh offers formal HCI/UX/Design education. There are no agencies with expertise in user research or experience design. The people I have recruited so far received their formal design education abroad, in Sweden, the USA, and India. Even though there are people who consider themselves UX professionals, after going through 5,000+ resumes and portfolios I was unable to find anyone with a strong background in UX or knowledge of HCI.

CEX, UX, Service Design, HCD, HCI … Wow! I Don’t Know Where to Start!

Like many other global organizations, we decided to create our own User Experience team, working hand-in-hand with product developers. As the team’s role has changed, so has its name. The team started in the Customer Experience Department in 2012 and at that time it was called Service Design and Usability.

In 2015, the team moved to the Product Department. After doing some research on the role and concept of UX, I decided to change the name to Service Design & UX. Now, we are planning to change the name again to Experience Design & Strategy.

These changes in the team name are not just words, but reflect our journey to where we are today.

At the start, my first challenge as design leader was to understand the terms the team used and how our entire organization interpreted them. Even today, a significant number of people come to us asking, “Can you fix the UX of my product?” or “Can you give us some feedback about colors, so we have good UX?” I respect the fact that they come with the belief that UX is important, but UX is so much more than these questions.

This approach wasn’t good for the team, either. When I checked the team’s Key Performance Indicators (KPIs), I saw that the team was chasing small requests without focus, and its contribution to the business was unclear.

Building a strong UX team required changes in both the organization and attitudes. After the initial shock, I started doing some research and discovered the concept of the “design-driven organization.” This made me curious, so I started looking at other organizations to learn how they work, recruit, and educate their design resources. My search included Google, Microsoft, SAP, IBM, Facebook, Amazon, Apple, and Service Innovation Lab.

A sample of pages from the research on careers and human centered or UX design
Figure 1. I started my search with organizations that were well known in the community as design-driven companies with innovative product ideas and business models.

Based on this research, I split the department’s overall tasks into a few categories: identifying key problems, roadblocks, and solutions.

The key problems identified were:

  • The organization’s understanding of design, its impact, and its value
  • The traditional way of working, which prevents creating real customer value
  • The difficulty of hiring and retaining a strong design team
  • Challenges with internal infrastructure for creative and collaborative work

Based on these issues, I had to overhaul the job descriptions for the entire team, including my own! We also redefined our key responsibilities, the qualifications needed to be part of the team, and the educational requirements.

Samples of pages (text not readable)
Figure 2. I researched many job descriptions on the web to rewrite ours.

With my new vision of the team in place, I began looking for professional UX resources, and faced the biggest challenge: scarcity of qualified applicants in the market due to lack of formal training or certification programs. I started using LinkedIn, Facebook, and a few other foreign university websites to find potential candidates for my team.

During this phase of the work to transform my team, I learned many new terms. My bookmarks list included portals of information about human factors, ergonomics, information science, psychology, adaptive technology, and more. This process and new knowledge helped me recruit two good team members. I even have two people in the pipeline if they return to Bangladesh.

UXPA home page with a long list of bookmarks on UX topics
Figure 3. From UXPA to commercial portals, the web provided information about the state of UX around the world.

Design Capability Blueprint

The journey to shape the UX team can be looked at in 6 parts. This structure is a blueprint for developing a design capability.

  • Setting up the organization
  • Developing governance and making sure people are aware of it
  • Building team (and organizational) capability
  • Developing a design-driven culture
  • Organizing the work process
  • Building infrastructure to become customer-centric

Setting up the organization

This work was in two parts. The first was rebranding myself as a design leader and defining the role of an empowered leader with experience in design and leading a design team, driven by specific customer-focused KPIs.

Then I had to create a design unit that includes all of the design and other creative resources reporting to me, the design leader.

Most UX teams will include some or many different UX roles, but we have kept the formation of the team dynamic so that we can adapt according to project needs.

For example, right now we have two user research experts, one specialized interaction designer, and one visual design expert. However, through training in India, one of our user researchers is also a usability expert. The interaction designer has spent significant time educating himself as an information architect. The visual designer is learning interaction design through online courses offered by IDF and courses from Georgia Tech. Through this program of education, we keep our designers motivated in their jobs, always doing and learning something new.

If we decide to work on a new product/service concept, our team can focus on UX research. As we work on a design system, other skills come into play. There can be quite a lot of overlap between these roles, but this will keep the team (and costs) small, and allow the team to find its feet.

Once we decided the roles and the responsibilities, we defined how our UX team would collaborate with other teams. As laid out by Lean UX author Jeff Gothelf in his post on integrating user experience into Agile development, there are two options: the internal agency model, or the hub and spoke model.

  • Internal Agency Model: A UX manager acts as a gatekeeper, intercepting and divvying up incoming work to the team based on ability and capacity. This approach means that designers must figure out how to make their output comprehensible to developers. It also means that other teams will have little understanding of the UX team, and vice versa.
  • Hub and Spoke model: UXers are placed within other groups, such as product design, development, and marketing or sales. This way, designers “feel connected to (the) team’s focus. In doing so, the designer’s priorities become clear.”

We also have cross-functional teams dedicated to the development and improvement of a specific service for the duration of its lifecycle. This is different from today’s project teams, which are assigned only for the duration of a project that ends at launch.

Building team (and organizational) capability

Our work to build up the capability of the team included:

  • Employee development: an internal program to build design capability by up-skilling/cross-skilling existing resources and providing them with “on-the-job” training.
  • Design training across the full organization on design value and methodology, providing practical tools for everyone to use in their daily work.
  • Job rotation, which is a work in progress. We haven’t yet been able to establish a mature “bi-directional” job rotation program to enable experienced resources to help other business units. However, we support other business units through virtual collaboration, which is appreciated by management and has helped designers boost their confidence.

Recruiting new team members has been the most challenging part both for HR and the design team. I have been following Julie Zhuo, VP of product design at Facebook, for some time. “At a startup, you need your first one or two designers to be versatile—great jacks-of-all-trades,” she says. “Not only do they need to deeply understand and think through product strategy, they also need to have good interaction chops and decent visual sense, since they’ll be doing everything from designing the UX to thinking about the brand to designing icons—they need to have a diverse skill set.”

According to Zhuo, finding the ideal designer who fits your needs is a two-part process. First, you have to find promising candidates (she runs through three concrete steps). And second, you need to decide if they’re right for your team—which can be trickier than it seems.

With the help of our HR team, I have started visiting various universities as a guest speaker to build awareness about the subject and career opportunities. To build capability and bring some great minds into this field, we have also initiated several internship programs and capstone projects. I believe this will help me find the right candidates for my team.

Developing a design-driven culture

Almost every designer I’ve met (including myself) has, at one point or another, felt they were working in an organization or with people who didn’t appreciate design. A year and a half into this role, I still feel that most people in our company, including the leaders of various units, don’t understand what we do.

Transforming the telecom culture into a design culture requires a dedicated program. The program must include leadership training, communication of best practices, leaders walking the walk, and employees talking the talk.

Evangelizing UX is an ongoing effort that aims to promote the value of user experience to non-UXers. Reaching out beyond the UX team to get any UX-friendly executive actively on board with promoting UX is an excellent way to earn acceptance.

There are a lot of people in any organization who have no idea what UX is. We started a small education program for the whole department to create awareness about what UX really means and to explain the various methodologies, including working with personas, user interviews, usability testing, user journey maps, interaction design, wireframes, mockups, and prototypes. Most importantly, we did this based on real cases from our business—not just theory or common examples.

Organizing our work process

Over the past 1.5 years, through various projects, we created a development process that combines design methodology and agile software development, both for developing new products and services and for improving existing ones.

We established a set of KPIs to help ensure management and operational focus on customer value and design quality.

Our Experience Standard defines the criteria that any of our services should meet before it is made available to customers. Establishing a design culture has helped the team strive to deliver consistently excellent services and experiences.

Finally, we have a project governance model that allows for agile and iterative ways of working, fully integrated with go-to-market processes, ensuring that customer insight/input is the main driver for decision making.

Building infrastructure to become customer-centric

In addition to developing the skills and capabilities of the team, we needed to create a space and tools to support our work. We wanted a design lab, a collaborative, inspiring workspace that supports creative working methods with permanent project zones for service development teams and customer zones for frequent customer interaction and testing. We are now rebuilding our lab with the ecosystem of hardware, creative software, tools, and templates needed to operate effectively.

Read … Learn … Explore …

I was inspired by a team member with an amazing collection of books for the passionate designer. The first book I read was Change by Design: How Design Thinking Transforms Organizations and Inspires Innovation by Tim Brown. The ideas in the books I read have transformed my understanding of design as a creative way of solving a real-world problem.

It’s not just books. There are numerous websites/pages, blogs, university research papers, awards, conferences, boot camps, and other articles and notes that helped me move forward and are keeping me up to date with the latest advancements in this field.

I am fortunate to have come across some great HCI/UX professionals who supported me with their valuable time and mentorship. I am particularly inspired by the eagerness of Bangladeshi HCI/UX professionals, some of them living abroad, to contribute to the development of a UX community in Bangladesh. Like me, they have also struggled to build a strong community and they faced similar challenges.

This journey is just beginning. There are many new areas of design to explore. As a father of a 2-year-old boy, I want to be responsible and contribute in a creative way, to ensure we gift a better world to our future. We can look beyond our immediate projects to explore the technical, social, material, and theoretical challenges of designing technology to support collaborative work and life activities.


More reading

These sites are often mentioned for UX reading and they were very helpful for me.


I must thank the management team of Grameenphone and the design team at Telenor, both of which have supported me tremendously since I took on the role of design lead in March 2016. I also want to thank Mr. Ataur Rahman Chowdhury, currently working as a UX consultant at Backbase in the Netherlands, for helping me to build a community from scratch. He opened a whole new world for me by introducing me to a group of HCI/UX researchers, academics, and professionals living in Bangladesh and abroad. I will remain indebted to these people for guiding me, and I strongly believe that we have only just started this amazing journey and have a lot more to achieve. In this journey, Dr. Nova Ahmed, one of my mentors, is guiding me as I build this community and helping me to learn more.

Experiencing Trust: Mother or Big Brother

Social psychologists tend to think about things like trust a lot. But we are not software designers. When I read what usability experts have to say about trust, the precision with which they talk about how to design for it is impressive. Trust assurances, third-party security seals, and lists of FAQs all seem successfully geared (or so the evidence suggests) toward engendering user trust in a website. Software design seems to look at trust from the sharp end of things—with precise ideas and interventions to affect a user’s trust.

It’s sort of discouraging for a social psychologist, because we tend to operate on the other end of the continuum—the blunt end. When your object of study is the human being, you can’t be overly wed to precision, as much as you might like to be. Just the same, it is sometimes helpful for organizations to think at the blunt end of the continuum when developing strategies to promote trust in e-commerce. The lessons of social psychology may not always be specific and detailed, but they are informative about what is going on between the user’s ears, and about the features the user brings to the context that affect trust.

In the wake of Facebook’s IPO, one of the great debates that will ultimately affect the trajectory of its stock price is the value of advertising through that interface. Consumers are increasingly asked to trade privacy for convenience in e-commerce, social networking, online banking, and so forth. User trust is likely part of the key to unlocking that value. Some users see e-commerce and social networking as a sort of “Mother” presence that looks out for them, tries to connect them with other people and products of value to them, and has their best interest at heart. These folks will probably be a lot looser with privacy than users who view social networking sites as “Big Brother.” Big Brother keeps an eye on you, but he isn’t looking out for you. You have to watch your back!

So what types of online social interactions shift our perceptions from “Mother” to “Big Brother?” Social psychologist Leon Festinger might have something to say about this.

Cognitive Dissonance

Leon Festinger is credited with proposing the theory of cognitive dissonance (Festinger & Carlsmith, 1959). Cognitive dissonance occurs when a person simultaneously holds two beliefs that are psychologically inconsistent. The early demonstration of dissonance involved getting seventy-one undergraduate males to do a boring task. A really, really boring task. Festinger then paid these guys either $1 or $20 to tell the ostensibly waiting next participant that the task was fun. Keep in mind this was 1959, so in today’s money it was probably more like being paid either $8 versus $160 to lie. Of course, being poor students, they all took the deal.
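As a rough check on that inflation estimate, here is a back-of-the-envelope calculation; the CPI-U annual averages of roughly 29.1 for 1959 and 229.6 for 2012 are my own assumption, not figures from the article:

\[
\frac{229.6}{29.1} \approx 7.9, \qquad \$1 \times 7.9 \approx \$8, \qquad \$20 \times 7.9 \approx \$158 \approx \$160.
\]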

At a later time, in a different room, Festinger asked the participants to rate how much they enjoyed the boring task. According to the prevailing thinking of the time, the participants who got paid the greater amount to lie should have come to rate the task as more enjoyable—more money, more enjoyment. What Festinger found was the exact opposite—the cheap liars rated the experiment as far more enjoyable than the expensive liars. Why?

Festinger explains this as due to dissonance, as in, you can’t really think “I’m an honest person,” and “I just lied for a very small amount of cash,” at the same time. They are psychologically inconsistent. The expensive liars don’t have this problem. Their large compensation justified the small lie. To resolve the dissonance this situation aroused, the cheap liars shifted their perceived enjoyment of the boring task so as to reduce the inconsistency. If they really enjoyed the experiment on some level, then they didn’t lie!

Since Festinger’s contemporaries were quite skeptical of his findings, he and his colleagues and students spent a lot of time in the laboratory, replicating this sort of finding and looking for the edges of it. One of the factors that they stumbled upon was the importance of free choice in dissonance arousal. Jack Brehm (1956) posed as a marketing researcher and asked several women to rate the attractiveness of eight different appliances. Afterwards, as a reward, each woman was told she could have one of the appliances as a gift, and was offered a choice between two appliances she had rated as being equally attractive. After she made the choice, it was wrapped up and given to her. A few minutes later she was asked to rate the appliances again. Lo and behold, the ratings had changed! The women rated the chosen appliance as significantly more attractive than the rejected appliance in the second round of ratings. Again, why?

Brehm argued that this shift in ratings occurs as a way of justifying the choice the women made. The choice was really between two “equal” objects. Thinking about the negative aspects of the chosen object was inconsistent with the behavior of having selected it. To resolve this inconsistency, after making the choice, the women focused on the positive aspects of the chosen object and the negative aspects of the rejected object. Freedom of choice created a motive within the women to justify their choice, making it consistent with their broader cognitive landscape.

So what does this have to do with user trust, exactly?

User Trust, Attributions, and Dissonance

Human beings are constantly generating explanations for why they engage in a variety of behaviors. Psychologists call this process “attribution.” Participation in e-commerce is no different. I engage in e-commerce because it is simple and convenient for me, or perhaps for other reasons.

One of the major ways we slice these explanations is into “internal” and “external” attributions. Internal attributions are, obviously, internal. They deal with intrinsic motives—I do something because I want to. External attributions focus more on extrinsic incentives—I do something because I have to. Extrinsic motivation includes trying to avoid some form of punishment, as well as trying to acquire some incentive.

Let’s take online banking as an example. When users have the choice of whether or not to engage in online banking, and they choose to do so, they are likely to attribute this choice to intrinsic motives. They could go to a brick-and-mortar bank and have a face-to-face interaction, but they choose to use the online application. For a lot of users, this is the end of the story. They decide they like online banking—it’s way more convenient, saves them time, and it’s simple to do. They do it for themselves. This creates a dynamic exchange with the online banking interface that lays the groundwork for trust to develop. After all, no one is making me use the online application, so I must be using it because I trust it.

But for a whole segment of the population, online banking feels a lot more like something they’ve been forced into. Their bank started charging fees for receiving paper statements, for transactions made with tellers, and so forth. If the fees become exorbitant to the point where users feel “forced” into online banking, they probably are going to do it, but it isn’t going to feel like a free choice. They are going to know exactly why they are doing it—to avoid the penalties their bank is imposing for choosing otherwise. This is an extrinsic motivation, and it doesn’t engender trust.

Trust develops when users attribute the decision to engage in a relationship (for example, online banking or social networking) to their own free choice—just like the women in Brehm’s study. When users feel forced into the relationship, it undermines their trust. Users who are enticed to interact online with too big a stick or carrot are going to attribute their interaction to the punishment or reward they are avoiding or seeking, not to trust. This is sort of a “Big Brother” perception. “Big Mama” uses a lighter touch.

So the key principle of dissonance research in user trust is this: minimal pressure equals maximal trust.

Dynamics and Details

Cognitive dissonance highlights how the dynamics of an interaction influence the development of user trust in an online application. This is the broader contribution social psychology has to make towards understanding user trust. Trust falls squarely between the ears of the user, and just as the brain itself develops, trust also develops over time. It occurs over a temporal continuum, not a dichotomy. Although we can design for user trust with specific, discrete manipulations (presence or absence of security features, differing levels of system reliability, and so forth), trust typically unfolds in an interaction history. Usually, an interaction history builds trust as people slowly disclose more and more to each other, assessing the risk and reward of the relationship as they go. It is a dynamic dance, with both parties assessing each other along the way.

The slow, incremental exchange of information is akin to the notion of free choice in cognitive dissonance. Requests for smaller bits of a user’s information aren’t as likely to arouse defenses as large requests for extremely sensitive personal data. If users slowly offer more information over time under relatively little pressure, the chances that they will attribute their participation in the interaction to their own trust in the online application increase. Giving users more ways to engage in the slow exchange of increasingly sensitive information over time should help sustain the dance, and result in greater levels of trust.

One of the challenges in studying trust in online applications is the seductive nature of details—in some ways they are so much easier to focus on. They are precise and their effects are often easier to measure. But this misses the dynamics that undergird the development of trust. Dynamics are messy and unpredictable, but if we ignore them, we risk pushing users for too much, too fast. When you put dynamics and details together, you’ve got a recipe for how an online application can seem a lot more benevolent, evoking “Big Brother” a little less and “Big Mama” a lot more.

The author, a social psychologist, details research on cognitive dissonance: the phenomenon of study participants’ words and actions not matching each other. She also describes the difference between internal motivation (“because I want to”) and external motivation (“because I have to”). It all adds up to a recipe for how an online application can seem far more benevolent, evoking “Big Brother” a little less and “Big Mama” a lot more.

The full article is available only in English.