It can be really hard to motivate people to do research, and even harder to persuade them to do the right research.
In a world of scarce resources (also known as the “real world” in which most of us actually work), there is rarely enough time for everyone to cheerfully do whatever research they want and then put the pieces together. Instead, we UX professionals can find ourselves fighting with our colleagues in neighboring disciplines over whose preferred methods are better and more valid. This can lead to situations in which the competing disciplines succeed only in discrediting each other.
Old-school market-research types, of course, can drive us UX professionals crazy by relying on attitudinal research—asking people what they like and whether they are satisfied—and sometimes arguing among themselves about quantitative and qualitative approaches.
Then again, engineers and data-science types tend to expect behavioral data to be all that matters. There certainly is no doubt that, especially with web-based experiences, it gets easier and easier to amass detailed data on user behavior. Combined with rapidly developing tools and training for visualizing and analyzing data, these methods can yield amazing insights, along with the potential to deliver findings nearly instantly.
Finally, many UX professionals have learned the value of directly observing people’s ability to do tasks, with a rich variety of usability methods. In my experience, it’s also true that we tend to be wary of other methods. Too often, we get agreement to do a usability study, only to find a well-intentioned marketing colleague organizing a focus group instead, without realizing the difference.
My point is that this is a futile debate because no single method accurately paints the full picture.
The ABC’s of Research
So, how do you help people understand the value of doing different types of research?
In thinking about the problem, I like to sketch an equilateral triangle and label the corners A (Attitude), B (Behavior), and C (Comprehension), as shown in Figure 1.
- Attitude – How users feel, either before, during, or after an experience, learned by asking people to talk about it, most often using methods such as surveys or focus groups.
- Behavior – What users actually do in real life (not in a lab or a focus group facility), usually measured and analyzed as statistical data, through completion/conversion rates, usage over time, bounce rates, etc. Google Analytics is a good example.
- Comprehension – How users think about tasks and whether they “get” how to do them, collected through direct observations in the form of usability tests, ethnography, etc.
Each set of methods, A, B, or C, will suggest how users might respond using the others, but you can’t presume that positive results with any one set of methods are predictive for ultimate success. For example, poor comprehension (C) will rarely result in satisfaction. Yet, great comprehension doesn’t in itself predict that people will be satisfied, because they may not care about the product (A). Even if they understand and love the idea of the product, they may still not use it or buy it, for any number of reasons (B).
I like the metaphor my equilateral triangle suggests, with the three “congruent” angles suggesting ultimate agreement or harmony between the A, B, and C methods. The findings won’t be the same, but they will support and balance each other.
This simple doodle can facilitate a conversation. Often, colleagues from other disciplines will start proposing examples from their experience and offering testable hypotheses about the current project. The humble triangle seems to work well to open people’s minds, and to reassure them that methods other than their own are complementary tools, not threats.
Putting the ABC’s to Work
The conversation generally goes something like this, all the while referring back to the triangle sketch:
Let’s say a B measure shows surprising under-usage of a feature that was expected to be important. Is this because people don’t want it (A)? Or, is this happening because it’s not noticed, hard to use, or misunderstood (C)? We can quickly survey people about it, but their opinions often seem at variance with their behavior. We can do a quick lab study to see if there’s a usability problem, but perhaps that will confirm that they can perform the task just fine if asked to do so while being observed.
Here’s the magic: if we do apply all three approaches regularly, we’ll get a coherent (metaphorically, “congruent”) explanation. In this abstract example, the original instinct of the B types—just scrap the feature, no one uses it—might be totally correct.
Then again, there might be a big, but fixable, usability mistake somewhere in a key step, or maybe in the name of the feature, or whatever. So, we test this hypothesis with a C method, for example usability testing. Let’s say that the usability test shows that target users find the feature and use it with ease.
Now what? We hypothesize that there is a problem that is best found with an A method. We ask them about it, using a survey or a focus group. If they say it’s a great feature (a seeming contradiction, since they’re not using it), we don’t have to give up. Next, we might ask them if they’ve used it yet and, if not, why not. Perhaps this feature has a very high perceived value, and people would be disturbed to see it go away. Or, perhaps on reflection, it’s just the kind of thing people figure someone else must want, and it really is of low value. Either way, we now know what’s going on.
So, when stuck because of inconclusive findings from one method, we form hypotheses to test using the other methods. For instance, we often don’t know what to make of measures like abandonment or bounce rates. After all, one of the best ways to increase your bounce rate is to put a widely sought-after piece of information on the home or landing page. Delighted visitors get the information they thought they would have to hunt for—and then “abandon” you. Of course, this doesn’t mean you can assume that a high bounce rate means the landing page is excellent; it may in fact mean that the experience is awful, or at least not useful, until you test it with other methods.
Back in 1997, I was involved with the first web efforts of a major catalogue retailer, the kind whose customers are very loyal. They observed that roughly 80 percent of their (still modest) online sales came from a user entering a catalogue number in a box on the home page, presumably copying it from the paper catalogue, and then checking out with that one item (B). So we asked telephone representatives to ask people if they had visited the site, and whether they liked it (A). Customers said that they liked the site but didn’t like shopping there. Now, we were getting frustrated. We had data from A and B methods, but still weren’t sure why users were only entering a catalogue number rather than browsing the site and putting multiple items in their basket. With the hypothesis that the problem must be with comprehension, we did usability tests and found that the site, which was organized using exactly the same categories as in the beloved catalogue, in fact made it difficult to find things (C). Why?
The usability test revealed that the site structure was seriously flawed. The catalogue was designed for browsing, and was organized around branded lines of clothing—names that many customers recognized in context but virtually no one knew in a text list. Remember, in 1997, more than two-thirds of customers were using dial-up modems, so browsing around or flicking through carousels were not options. Every mistaken click was a descent into online hell.
We reorganized the site using what is now a familiar structure: Men’s > Jackets > Winter > Mountain Anorak. Without making any other changes, basket size increased, the proportion of people entering catalogue numbers dropped to 20 percent, and sales doubled (B). The next day.
Since 80 percent of web customers initially entered the catalogue number, you can be sure that the client (a very smart and thoughtful group, by the way) wanted us to do the obvious: make the catalogue-number box bigger and more prominent. Until we did the usability test, they also insisted that the site continue to be organized like the catalogue. The behavioral data told the truth, but not the whole story.
The ABC triangle serves, first and foremost, to help create a comfortable and respectful conversation about how to most effectively do which kind of user research. This works both at the beginning of a project, during the planning and budgeting stages, and also while the project is underway. By simply thinking about the question, the choice of research method becomes more obvious.
In a real world of limited resources, we need the flexibility to mix things up, doing what makes sense given the questions and the constraints. Instead of ending with an impasse, bring it back to a simple discussion of ABCs.
Retrieved from http://uxpamagazine.org/choosing-the-right-research-method/