The traditional divide between market research and user experience research leads to at least two problems for researchers and designers:
- Market research misses the critical component of understanding users in the context of their experiences with products.
- User experience research misses valuable opportunities to learn about marketing effectiveness during product evaluations.
As large-scale, consumer-focused companies and interactive web design agencies continue making efforts to close this divide, usability professionals must employ new methods for learning about consumers, and new ways to blend research techniques across disciplines. This article focuses on a multi-method approach to measuring the quality of web initiatives that draws upon the techniques of both traditional market researchers and usability professionals, and allows for both disciplines to partner in measuring online effectiveness.
How do we effectively measure online marketing quality? It’s not just a matter of looking at web logs and increasing site traffic. It’s not about measuring task completion and error rates alone. And it’s not about guessing what will be effective, or using intuition to make major development decisions. Measuring quality means knowing definitively that your online product will meet business objectives. This can be done using a variety of research techniques. We do it through a rigorous process that measures the success of client initiatives against strategic goals at various stages of development, before the product ever goes live.
In order to effectively measure quality, researchers must clearly define business objectives and find the right tools to measure against those objectives. Below are seven steps that can take marketing initiatives from good to great. If your client uses methodologies such as TQM or Six Sigma, this approach may seem familiar since it is loosely based on the DMAIC model (define, measure, analyze, improve, control).
- Define success and quality metrics
- Develop a benchmark
- Determine drivers of success
- Set success and quality targets
- Measure quality
- Improve prior to launch
- Launch and control
The seven steps to getting from good to great map closely to the website development process (see Figure 1). It’s similar to the traditional product development process for manufacturers. We start with concept and planning, move on to design and specifications, and finally move into production and launch. This development process may vary slightly from agency to agency and company to company. Companies using the agile development process may have more truncated versions. The point is that the development of every online product or website should follow a defined and qualified process, and that process should be the basis of your quality measurement program.
Step 1: Define Quality Metrics
Defining quality metrics starts very early in your development process—ideally at the same time you are identifying the success measures and defining requirements. When measuring the quality of your site, establish two tiers of metrics: business impact metrics and experience metrics. Business impact metrics measure your overall, highest-level goals for your site. In all likelihood, you’re looking at brand affinity, conversion, and retention. You may have others. Experience metrics focus on the individual elements of a website that most drive your strategic goals. They should be customized very closely to your industry. For example, an automotive manufacturer may set the following quality metrics (among others) for its consumer-facing website:
Business impact metrics
- Consideration (likelihood to connect with a dealer, request a quote, or test drive)
- Overall Brand Opinion (positive/negative)
- Brand Attribute Ratings (such as environmental friendliness, safety, quality, and fuel economy)
Experience metrics
- Overall satisfaction
- Success in finding information sought
- Individual satisfaction ratings for various site tools, content, and features
- Ease of completing critical tasks (such as understanding optional and standard features, comparing vehicles)
- Specific problems and frustrations encountered
Step 2: Develop a Benchmark
Prior to starting site development, set a benchmark in order to understand how your current website is performing and to learn where to focus efforts for new development. This type of benchmarking against your quality metrics requires a blended approach of traditional market research and usability techniques.
The blended approach measures the site’s effectiveness in the context of users performing basic tasks in order to uncover user experience issues (the typical goals of usability studies), while simultaneously using large samples of participants to measure attitudes and perceptions (the typical goals of traditional market research). This baseline evaluation of the website experience should include no fewer than 200 target users, should capture their thoughts, attitudes, and behaviors, and should allow for pre- and post-exposure measurement of the website’s impact.
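As a concrete illustration, the pre/post impact measurement reduces to a mean lift in ratings across the sample. The sketch below uses a small, invented set of 1–7 brand-opinion ratings (the data, scale, and sample size are hypothetical, not from any actual study):

```python
from statistics import mean

# Hypothetical 1-7 brand-opinion ratings from the same respondents,
# collected before and after completing tasks on the site.
pre  = [4, 5, 3, 4, 5, 4, 3, 5, 4, 4]
post = [5, 6, 4, 5, 5, 5, 4, 6, 5, 5]

# Positive lift means exposure to the site improved brand perception.
lift = mean(post) - mean(pre)
print(round(lift, 2))  # 0.9
```

In practice the same calculation would run over the full benchmark sample, broken out by segment or task.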
There are multiple tools on the market to enable this level of benchmarking: Keynote’s WebEffective, Usability Sciences’ WebIQ, RelevantView’s ActiveSandbox, and UserZoom. All allow you to send a large sample of your target consumers to a website to complete a variety of tasks while tracking where users go, their intent, and the impact of the site on user perception and future behaviors. In addition to collecting reliable user perception ratings through a robust market research questionnaire with up to 100 data points, you can diagnose issues with detailed clickstream analysis. You can recruit participants with an onsite intercept, or through a panel (such as SSI, GMI, e-Rewards, or Greenfield).
This method of benchmarking offers a baseline understanding of the website’s strengths and areas of opportunity, along with key numbers against which to measure ongoing improvement. The data can also be used to isolate site experience drivers of overall success.
Step 3: Determine Drivers
You can measure every corner of your site, but what really matters? Which aspects of your website experience have the greatest overall impact on brand and conversion? A detailed statistical driver analysis will tell you what aspects of your users’ website experience most drive your company’s strategic goals. The result should be a list of five to ten areas that have the most impact on your success. For example, a tax software provider may learn that online comparisons and consumers’ experience with customer support have the greatest impact on eventual purchase decision—more so than website demos and customer testimonials. By utilizing a driver analysis, a traditional statistical technique employed by market researchers, you can focus development efforts on the experiences that best support business goals.
Using the benchmark data gathered in Step 2, start the driver analysis by developing individual indices for brand, conversion, and overall satisfaction, using multiple metrics that each represent a slightly different aspect of the construct. For example, the conversion index would consist of multiple related metrics such as likelihood to purchase, likelihood to recommend, and likelihood to return. Then put all of the specific quantitative questions assessing various aspects of the customer experience (and that are not part of the impact indices) into a principal components analysis to see which aspects of the site experience have the most impact on brand, conversion, and overall satisfaction. The end result should be a clear picture of which elements of the site drive business goals.
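The index-and-driver idea can be sketched in a few lines. The example below substitutes simple Pearson correlations for a full principal components analysis, and all metric names, weights, and data are simulated purely for illustration:

```python
import random
from statistics import mean, pstdev

random.seed(0)
n = 200  # benchmark sample size suggested in Step 2
clamp = lambda v: max(1.0, min(7.0, v))  # keep ratings on a 1-7 scale

# Simulated 1-7 experience ratings; the metric names are illustrative only.
experience = {
    "comparison_tool":  [clamp(random.gauss(5.5, 1.0)) for _ in range(n)],
    "customer_support": [clamp(random.gauss(5.0, 1.0)) for _ in range(n)],
    "site_demo":        [clamp(random.gauss(4.5, 1.0)) for _ in range(n)],
}

# Simulate a conversion index driven mostly by the comparison tool,
# so the analysis has a real signal to recover.
conversion_index = [
    clamp(0.8 * experience["comparison_tool"][i]
          + 0.2 * experience["customer_support"][i]
          + random.gauss(0, 0.5))
    for i in range(n)
]

def pearson(x, y):
    mx, my = mean(x), mean(y)
    cov = mean((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / (pstdev(x) * pstdev(y))

# Rank experience metrics by correlation with the conversion index.
drivers = sorted(experience,
                 key=lambda k: pearson(experience[k], conversion_index),
                 reverse=True)
print(drivers)  # comparison_tool should rank first
```

With real benchmark data, the indices would be built from actual survey questions and the ranking would point development effort at the top five to ten drivers.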
Step 4: Set Quality Targets
Based on your initial driver results, set quality targets or percentages for the business impact and experience metrics most important to your company. Do not set quality targets in a vacuum. Consider conducting similar benchmark studies to learn how your competitors perform on your key quality metrics. Looking at competitors’ website experiences allows you to: 1) learn best practices, 2) uncover failures to avoid, and 3) set realistic quality goals for your own website. For example, a major clothing brand may discover that while its site has a positive impact on purchase consideration for site visitors, a key competitor’s site is twice as likely to convert prospects. Similarly, the brand may learn that, compared to competitors, its site has significantly higher failure rates in the product comparison tool. You may also set quality targets by benchmarking various sites within your (or your client’s) company, or by setting targets similar to those in the Six Sigma methodology.
Step 5: Measure Quality
Measuring quality consistently throughout the development process is the most important step. Research should occur at least once prior to launch and preferably at the concept stage, prototyping stage, and beta testing stage of development. These user experience research methods are most familiar to usability professionals. However, when evaluating against key quality metrics, quantitative market research techniques may also be employed.
For example, designers often must choose among multiple homepage prototype designs. Relying on the usability of the pages alone will not adequately ensure that other key business objectives, such as brand impression, are supported. In this situation, a quantitative comparative evaluation of the competing designs will ensure that the winning design not only meets usability targets, but also supports business impact metrics. Consumers can rate each page against key brand attributes.
Typical quantitative methods for evaluating prototypes involve using large samples of target users in a variety of research methods, such as head-to-head comparisons of concepts, look-and-feel screenshots, or different designs. In these studies researchers can measure the effectiveness of the designs against usability standards, while using larger samples to measure user impressions and attitudes with much more confidence than in one-on-one usability tests alone.
Step 6: Improve
After each measurement of quality, it’s essential to improve the website based on the input. Prioritize improvements based on the key drivers determined in Step 3. After making improvements, the team should again test designs against the quality targets to ensure success. Without defined drivers and targets to influence the improvement process, precious resources could be spent on the wrong features.
Step 7: Launch and Control
Once a website has reached the end of its development cycle, the team can use quality targets to make a go/no-go decision on the launch. The new designs should at least meet targets for the major drivers of overall experience. After launch, product managers should set regular intervals for benchmarking website performance to ensure high quality over time. This can be done quarterly, or on a schedule aligned with major content updates.
In order to initiate the process of taking your website from good to great, first develop internal cross-functional support for the multi-method quality measurement program. Solicit feedback and insights from market research counterparts at the key stages where their expertise will aid the process:
- Achieve cross-functional agreement on key business impact and experience metrics.
- Map a research plan for measuring quality against the website development process.
- Allocate budget to support various research inputs.
- Identify the most appropriate measurement tools based on your company’s unique needs.
If your company lacks the appropriate internal resources to develop a benchmarking program, external consulting firms that specialize in customer experience management can help get your program up and running.
Some Marketing Tools Defined
Brand metrics: Target customers’ perceptions of a company and its product. Examples are utility, performance, financial value, and durability.
Conversion metrics: Reflect the percentage of users who either state intent to complete key calls to action or eventually do complete the calls to action.
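As a minimal illustration of this definition (all numbers invented), a conversion metric reduces to a simple rate over the sample:

```python
# Hypothetical benchmark counts for one call to action.
visitors = 200
stated_intent = 46   # respondents who said they would request a quote
completed = 31       # respondents who actually submitted the request form

conversion_rate = completed / visitors
intent_rate = stated_intent / visitors
print(f"{conversion_rate:.1%}, {intent_rate:.1%}")  # 15.5%, 23.0%
```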
Key driver analysis: Used to understand which site experiences or brand perceptions contribute most heavily to a customer’s decision to buy. Examples are high satisfaction with a comparison tool, or general perceptions of the product as high performing after reviewing the website.
Overall satisfaction indices: Usually refer to a tally of customers’ self-reporting on subjective scales, such as a scoring from “very satisfied (7)” to “not at all satisfied (1).”
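A minimal sketch of such an index, using a small invented sample of 1–7 ratings, alongside a common “top-2-box” variant:

```python
from statistics import mean

# Hypothetical 1-7 satisfaction ratings from a benchmark survey.
ratings = [7, 6, 5, 7, 4, 6, 6, 3, 7, 5]

sat_index = mean(ratings)                               # average score
top2box = sum(r >= 6 for r in ratings) / len(ratings)   # share of 6s and 7s

print(round(sat_index, 2), top2box)  # 5.6 0.6
```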
Principal components analysis (PCA): A statistical technique for reducing complex data to a smaller number of dimensions (components), chosen so that each successive component captures the largest possible share of the remaining variance in the data.
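A minimal PCA sketch using NumPy; the data are simulated (four correlated ratings driven by one latent factor), so the first component should dominate the explained variance:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy survey data: 200 respondents x 4 correlated experience ratings,
# all driven by a single latent factor plus noise.
latent = rng.normal(size=(200, 1))
data = latent @ np.array([[1.0, 0.8, 0.6, 0.1]]) + rng.normal(scale=0.3, size=(200, 4))

# PCA via eigendecomposition of the covariance matrix.
centered = data - data.mean(axis=0)
cov = np.cov(centered, rowvar=False)
eigvals, _ = np.linalg.eigh(cov)             # eigenvalues in ascending order
order = eigvals.argsort()[::-1]
explained = eigvals[order] / eigvals.sum()   # variance explained per component

print(explained[0])  # first component captures most of the variance
```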
Retrieved from http://uxpamagazine.org/good_to_great/