Measuring Perceptions: Meeting the Challenge of Perceived Quality

Design sells a promise, but the real challenge in the design process is ensuring that the design’s promise fits the user’s expectations. This is especially true for products of everyday use. The Experience Design Team at Whirlpool México took on the challenge and developed a methodology that is helping the company understand and measure users’ perceptions of design.

One of our first approaches to understanding users’ perceptions consisted of using semantic cards. Even though this kind of evaluation has worked well in some countries, we found that Mexican participants had difficulty assigning semantic values to products. People tended to distort the meaning of the adjectives used to evaluate design. We also needed a larger sample size and had to customize evaluations (according to the target user group) to obtain trustworthy results, making the process time-consuming and expensive.

We then decided to change to a different approach that better suited Latin America and turned to a methodology already tested in Whirlpool Europe: SEQUAM (Sensorial Quality Assessment Method), developed by Lina Bonapace and Luigi Bandini-Buti at their ergonomics studio in Milan. This approach involves analyzing a product or prototype in terms of the formal properties of its aesthetic elements and then, through empirical studies and structured interviews, linking these properties with product benefits (Bonapace 1999). The basic methodology has three main phases of evaluation:

  • Visual
  • Tactile
  • Use

SEQUAM used a sample of 20 people with individual interviews lasting about two hours, making the overall duration of the test about a week. Even though the information from the test included analysis of rankings and ratings, the core deliverable was the qualitative feedback obtained from users. We wanted perceptions of design, but there was some reluctance to accept the SEQUAM results, as they relied heavily on qualitative data rather than statistical analysis. In looking at the data, we noticed that it was very difficult to uncover what users meant by the “best quality design.” For example, when faced with a choice of coffee makers, some participants justified their choice with words like “ease of use,” while others talked about “durability of materials” or “best performance.” When we started analyzing this behavior, we detected that verbalized perceptions typically corresponded to experience values rather than aesthetic values. That meant users tended to justify what they liked extrinsically (aesthetics) with intrinsic values (those related to the experience of use).

Developing a New Method

After some years of experience using SEQUAM, we saw an opportunity to measure more specifically the experience values of design as perceived by users, while at the same time improving the methodology with the scientific rigor demanded by company management.

Our first step was to identify the core experience values associated with the design of household appliances. We found and categorized several values associated with household appliances according to their main platforms (cooking, laundry, and food preservation). We can illustrate them with those that could apply to a small category of household appliances like portables (e.g., coffee makers or toasters):

  • Usability
  • Maintainability
  • Portability
  • Capacity

Identifying these values was the first step in a deeper understanding of user perceptions. Measuring these values helped us to set targets by brand and product. This was especially useful because Mexico has a wide range of users and products that go from low-end to high-end, and from mass market to premium brands.

Experience values would represent the backbone of an evaluation of design perceptions, but we also wanted to look at the possibility of assessing extrinsic values like modernity, technology, surprise, uniqueness, craftsmanship, and so on.

As a second step towards the new methodology, we decided to use an approach similar to SEQUAM, with in-depth user research. The study would simulate user behavior on the sales floor with people seeing our products among their main competitors. This study included three phases:

  • Visual phase captured as First Impact perceptions
  • Tactile phase captured as a Non-Guided Exploration of products
  • Use phase, captured as a Guided Exploration of product features, led by a third party

To add scientific rigor to the methodology, we decided to increase the sample size to 30 people; this number would allow us to create average scores that could be submitted to Analysis of Variance (ANOVA) testing. Further analysis would give us the opportunity to validate whether differences among mean scores were due to random error or to significant variance, with a confidence level of 95% or up to 99%. We also decided that reporting results would highlight scores rather than rankings; rankings would require a larger sample size to validate statistical accuracy, while mean scores would allow us to determine their accuracy (or variance) through experimental analysis.
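The ANOVA step described above can be sketched in pure Python. This is a minimal illustration of the one-way ANOVA F statistic, not the team’s actual analysis tooling, and any participant scores passed to it would be hypothetical rather than the study’s raw data:

```python
# One-way ANOVA sketch: do mean perception scores differ between groups
# beyond random error? Pure standard-library illustration.

def one_way_anova(groups):
    """Return (F statistic, df_between, df_within) for a one-way ANOVA."""
    k = len(groups)                          # number of groups (e.g., products)
    n = sum(len(g) for g in groups)          # total observations
    grand_mean = sum(sum(g) for g in groups) / n

    # Between-group sum of squares: spread of group means around the grand mean
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    # Within-group sum of squares: spread of scores around their own group mean
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                    for g in groups)

    df_between, df_within = k - 1, n - k
    f_stat = (ss_between / df_between) / (ss_within / df_within)
    return f_stat, df_between, df_within
```

Comparing the resulting F statistic against the critical F value for the chosen confidence level (95% or 99%) tells us whether the score differences are unlikely to be mere random error.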

One of the radical changes from SEQUAM was conducting the test with groups of participants instead of individual sessions. To mitigate data bias, we took the following measures:

  • Brands and machine capacity claims for products would be hidden.
  • Participants were asked to follow three basic rules: be honest; neither influence nor be influenced by others; and have fun.
  • Participants would record their scores individually using a printed guide with continuum scales and a clipboard.
  • Small focus groups would capture qualitative feedback during the non-guided exploration phase.
  • Three note takers would assist with recording individual feedback from the guided exploration phase.

The use of groups in evaluation allowed us to reduce the time and cost, and enabled us to focus on a detailed analysis of information, including both the statistics and qualitative comments of users.

Because the test had changed enough that it was no longer a SEQUAM, we decided to call it the PQ (Perceived Quality) test.

Case Study: Identifying key attributes in countertop appliances

The design of countertop appliances is a good example of how the PQ test can deliver valuable information to identify, measure, and drive the design of new elements.

When thinking about a blender, the key elements users pay attention to are the overall size, the capacity and material of the jar, and the speed control. These elements translate to:

  • Portability
  • Capacity
  • Maintainability
  • Robustness
  • Usability

The assessment of key attributes is made right after the introduction of the test and helps users set their mindset in context.

The first data obtained in our test showed where the users’ priorities were when looking at a blender. Figure 1 shows a radar chart depicting the mean scores obtained from the analysis. The chart shows that users pay great attention to the capacity of the jar but at the same time demand good cleanability of the product. Robustness follows closely in users’ priorities, while usability and portability recorded the lowest values. This doesn’t mean that users don’t care about usability or portability, but these two attributes are perhaps taken for granted among existing products.

Chart values: Capacity 9.2, Maintainability 8.3, Portability 4.3, Robustness 8.4, Usability 6.5

Figure 1. Radar chart showing user experience priorities among the participants.
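The priority ordering follows directly from the mean scores in Figure 1. A minimal Python sketch (the values are taken from the chart; the 1–10 scale is an assumption):

```python
# Mean attribute-priority scores from Figure 1 (scale assumed to be 1-10).
priorities = {
    "Capacity": 9.2,
    "Maintainability": 8.3,
    "Portability": 4.3,
    "Robustness": 8.4,
    "Usability": 6.5,
}

# Rank attributes from most to least important to users.
ranked = sorted(priorities.items(), key=lambda item: item[1], reverse=True)
```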

The First Impact stage captured users’ perceptions of the key identified attributes. Users immediately translated experience expectations into design demands. Figure 2 shows better perceptions of Product A due to its great capacity and speed controls. Product B was perceived as slightly smaller than Product A, but competitive on the rest of the attributes. Product C, on the other hand, captured low scores due to its evidently small size, but rated well in portability.

Bar chart with data shown in table below

Figure 2. Scores of First Impact perceptions, evaluated at a distance

Table 1. Scores of First Impact perceptions

Product           A     B     C
Design           7.3   5.8   4.0
Capacity         6.8   5.9   4.5
Robustness       7.9   7.7   3.1
Maintainability  7.5   7.1   3.8
Usability        6.9   6.6   4.5
Portability      7.2   6.9   7.5

The Non-Guided Exploration phase uncovers users’ expectations of product attributes. In this case study, Product A, considered best at first impact, falls in perceptions mainly due to the material the jar is made of (polycarbonate, seemingly not noticed at first), while Product B, second in the first stage, moves to the best perception scores. With its jar of real glass, it was now the unit people “liked” most, as reflected in the design ratings. This change of perception was clearly verbalized in all groups by the time they reviewed the jars, promptly pointing to “plastic” as a material that retains odors, in contrast with glass, which is easier to clean and odorless. Product C remained outside the best perception scores due to its small capacity, but still scored well in portability.

Bar chart of table below.

Figure 3. Scores of Non-Guided Exploration perceptions

Table 2. Scores of Non-Guided Exploration perceptions

Product           A     B     C
Design           5.4   7.5   4.4
Capacity         6.6   6.5   4.3
Robustness       5.9   7.7   3.6
Maintainability  5.6   7.1   4.1
Usability        6.5   6.6   5.6
Portability      7.1   6.9   7.1

During the final Guided Exploration phase, the test moderator guided participants to review and assess specific parts and features. This exploration included most of the physical features: case, controls, blades, jar, and lid. This stage does not generally change the perception of experience values dramatically; nonetheless, it allows researchers to uncover details that lead to better perceptions. This is illustrated well by the way people point out features like the capacity of the jar, or the ability to remove the blades for better cleanability. This last feature is especially valuable in Mexico, where consumers are used to having to disassemble everything for deep cleaning.

The test concluded by asking participants to assess their overall perceptions of the products. This assessment represented the final feeling of the users after a deep process of interaction with the products. We used this final evaluation to report how users evolved in their perceptions through each stage of the interaction process of the test (visual-touching-exploring). This evolution is reported in a slide that summarizes the perception of users throughout the test (see Figure 4), showing ratings that improved or dropped.

Bar charts showing all 3 sets of scores

Figure 4. Overview of scores from initial impressions to the final assessment.

Table 3. Scores from the final assessment

Product           A     B     C
Design           5.5   7.5   3.3
Capacity         6.4   7.9   4.1
Robustness       5.5   8.4   4.4
Maintainability  5.6   7.8   3.8
Usability        5.7   7.6   4.4
Portability      7.0   7.3   6.6
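The stage-to-stage evolution summarized in Figure 4 can be computed from the published tables. As a minimal sketch, here are the “Design” scores from Tables 1–3 and the change between consecutive stages (First Impact → Non-Guided Exploration → Final Assessment):

```python
# "Design" score per product across the three stages, from Tables 1-3.
design_scores = {
    "A": [7.3, 5.4, 5.5],
    "B": [5.8, 7.5, 7.5],
    "C": [4.0, 4.4, 3.3],
}

# Change between consecutive stages, rounded to one decimal place.
design_deltas = {
    product: [round(later - earlier, 1)
              for earlier, later in zip(scores, scores[1:])]
    for product, scores in design_scores.items()
}
# Product A drops sharply after hands-on exploration, while B improves and holds.
```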

Conclusions

The measurement of perceptions has represented a difficult challenge in user experience. We are fully aware that users build their deeply subjective perceptions through long experience, and that these perceptions are very hard to rate and verbalize. Our approach is not a panacea that can uncover what happens in the minds of users during everyday use, but this methodology has worked well by informing key aspects of design that need special attention to fulfill user expectations.

Our use of the PQ test still raises some concerns within the company. During a global meeting with UX leaders, some questioned whether there are too many parameters included for typical users, or whether the way we correlate qualitative information with ratings of perception is correct. Every time we prepare a PQ test, I explain that it is an experimental analysis. We need more than one interaction with users to uncover correlations of perceptions with key elements of a design.

As part of our best practices, we continually look for ways to improve our own methodology. The PQ test has proved to be both effective and efficient in driving good design decisions, but it still needs to evolve through experience. Our responsibility is to be there to supply the best user experience, and the best experience starts with good perceptions.

 

García, A. (2014). Measuring Perceptions: Meeting the Challenge of Perceived Quality. User Experience Magazine, 14(4).
Retrieved from http://uxpamagazine.org/measuring-perceptions/