
Absent, Undefined, Undervalued: Usability in Technology Research Firm Software Evaluations

Usability professionals often struggle to convince senior IT management of the impact of usability and the user experience on the success of internally developed software. The struggle remains when management purchases enterprise software (for example, content management systems and enterprise portals) developed by an outside company. In these cases, ensuring that usability is a criterion by which software is evaluated is often a challenge because management is typically inundated with sales pitches and the promise of cost-reducing features. So, where does management get its information when evaluating software?

Important players in the IT management decision-making process are technology research firms (such as Gartner, Forrester Research, META Group, AMR Research, Jupiter Research, and Yankee Group), which provide evaluations and comparisons of information technology products. These firms are trusted authorities for many organizations, and reviewing their reports is often an important part of management due diligence prior to making large software purchases.

Unfortunately, the evaluations produced by these firms may inadequately factor in usability. This may be why even large, sophisticated organizations continue to purchase and implement difficult-to-use software, and why many software companies do not appear to pay enough attention to usability in the development of their products.  This article will explore the treatment of usability in several product evaluations produced by technology research firms, and provide some recommendations for dealing with software evaluation processes that do not adequately address usability.

Overview of Technology Research Firms

Technology research firms provide detailed advice on technology trends and evaluations of technology products. This advice usually takes the form of consulting engagements, product evaluations, and white papers. Organizations pay fees to access these services, which they use to guide IT strategy and inform IT purchases. The reports and white papers are produced by the firms’ consultants and researchers, who typically have experience in industry, consulting, academia, or a combination thereof.

I informally investigated a selection of software evaluations and comparisons produced by two major technology research firms to see whether, and how, they considered usability in their reviews. I focused on reports that compared content management systems, enterprise portals, and business process management tools. It is important to note that technology research firms do produce quality reports that specifically address usability in various ways (such as the state of usability on public websites or potential usability concerns with Ajax-based applications). However, I confined my investigation to software product comparisons because these seem likely to be the types of reports IT management would consult prior to a purchase. I should also point out that my inquiry was by no means exhaustive, and follow-up work on this topic is essential.

Usability in Technology Research Firm Software Evaluations

Unfortunately, factors related to usability appeared only occasionally in the software evaluations and comparisons I reviewed, but the issue is more complex than that. In fact, the reports tended to undervalue usability in four ways:

1. Usability was not addressed

Many of the reports I reviewed simply did not factor usability into the evaluation. Take, for example, technology research firms’ use of quadrant charts. Quadrant charts are succinct information graphics often used to assign software products and vendors to one of four quadrants according to a number of qualitative factors. While these charts are effective from an information visualization standpoint, in the reports I investigated the methodology for assigning a product to a particular quadrant did not account for usability (see the sketch below). Although the firms do disclose the methodology by which they plot software offerings, thus exposing this deficiency, it seems unlikely that a busy senior manager will take the time to find and read through this material.
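
To make the concern concrete, here is a minimal sketch in Python of how a quadrant placement might be computed from weighted factor scores. The axis names, factor names, weights, and threshold are all hypothetical, invented for illustration; they are not the actual methodology of any research firm. The point is structural: a factor that carries no weight, such as usability here, cannot move a product between quadrants.

    # Hypothetical quadrant assignment: each axis score is a weighted sum of
    # factors, and the two axis scores place a product in one of four quadrants.
    # Usability is recorded for the product but absent from the weights, so it
    # has no effect on placement.

    X_WEIGHTS = {"market_share": 0.5, "financial_viability": 0.5}  # horizontal axis
    Y_WEIGHTS = {"product_strategy": 0.6, "innovation": 0.4}       # vertical axis

    def axis_score(factors: dict[str, float], weights: dict[str, float]) -> float:
        return sum(factors[name] * weight for name, weight in weights.items())

    def quadrant(factors: dict[str, float], threshold: float = 5.0) -> str:
        x = axis_score(factors, X_WEIGHTS)
        y = axis_score(factors, Y_WEIGHTS)
        if x >= threshold and y >= threshold:
            return "leaders"
        if y >= threshold:
            return "visionaries"
        if x >= threshold:
            return "challengers"
        return "niche players"

    product = {"market_share": 8, "financial_viability": 7,
               "product_strategy": 6, "innovation": 7,
               "usability": 2}  # dreadful usability, but unweighted

    print(quadrant(product))  # "leaders", whatever the usability value

In this sketch, the product lands in the top quadrant no matter what usability score it is given, which is precisely the deficiency a reader cannot see without digging into the published methodology.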

2. The definition of usability was misinformed

Not including usability at all in a report is not necessarily the worst type of deficiency I found. Another issue in technology research firm reports was the use of an overly limited, misinformed, or otherwise meaningless definition of usability. In one report on business process management tools, the closest things to “usability” were factors called “human-centric features” and “human interaction,” defined respectively as how well the products support “human activity” and how well the products support various kinds of “human interaction.” The report then assigned each product a score on these dimensions.

Unfortunately, these definitions are cryptic and fairly meaningless, even to a practicing usability engineer, and therefore any scores assigned to them are not useful. Without a meaningful definition of what constitutes “usability,” this report might do more harm than good: managers who value usability might infer that it has been appropriately factored into the software comparison, when this is not the case.

3. The usability score may have been inaccurate

Some reports I reviewed did account for usability, though it was hard to discern how the usability of a product was measured. Take, for example, the impetus for this inquiry. This investigation began after a company I worked for purchased and deployed enterprise portal software. After struggling with poor user adoption of the system, the usability team was brought in to deal with a number of major, obvious usability issues in the product. I was curious what technology research firms had to say about this product, especially with respect to usability, because it was difficult to imagine it scoring well in any usability-related category.

What I found surprised me. In one report, out of the 13 systems the technology research firm evaluated, the product my company settled upon shared the highest possible usability score with five other products. Setting aside the odds of nearly half of the 13 portal products under review earning perfect marks for usability, how could a product in which my team had documented major usability issues have scored so high? Unfortunately, the method by which the technology research firm derived the usability score was not published, but it is hard to imagine the scores were derived from formal usability studies.

4. The weight given to usability in a recommendation was minimal

Finally, some reports I reviewed were problematic because usability did not appear to count for much in the overall evaluation of a product. For example, in a report on enterprise content management systems, the “user experience” was a factor in developing scores for the products under review, but it was only one of 43 factors contributing to that score. More importantly, the weight of the user experience factor was low in comparison to the other factors, amounting to less than 1% of the overall score. Of course, business considerations sometimes necessarily outweigh usability considerations, and the importance of usability varies by software type and by the end users who will interact with the software. Still, in those cases where usability is important to the effectiveness or success of a software product, it should have more impact on the overall score a product receives.
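
The arithmetic here is worth making explicit. Below is a minimal sketch in Python of a weighted overall score; the factor names, weights, and scores are hypothetical, chosen only so that the user experience factor carries under 1% of the total weight, as in the report described above.

    # Illustrative weighted-score calculation (hypothetical factors and
    # weights; not the actual methodology of any research firm's report).

    def overall_score(scores: dict[str, float], weights: dict[str, float]) -> float:
        """Combine per-factor scores (0-10) into a weighted overall score."""
        total_weight = sum(weights.values())
        return sum(scores[name] * weights[name] for name in weights) / total_weight

    # The user experience weight is under 1% of the total (0.4 out of 50.4).
    weights = {"scalability": 10, "security": 10, "vendor_viability": 10,
               "integration": 10, "cost": 10, "user_experience": 0.4}

    strong_ux = {"scalability": 7, "security": 7, "vendor_viability": 7,
                 "integration": 7, "cost": 7, "user_experience": 10}
    weak_ux = dict(strong_ux, user_experience=1)  # same product, dreadful usability

    print(round(overall_score(strong_ux, weights), 3))  # 7.024
    print(round(overall_score(weak_ux, weights), 3))    # 6.952

Swinging usability from near-worst to best shifts the overall score by less than a tenth of a point on a ten-point scale, so a product’s usability could be dreadful without noticeably affecting its rank.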

Potential Implications

It is curious that technology research firms produce high-quality reports on the topic of usability, and many have well-qualified usability practitioners on staff, yet usability did not significantly factor into the product evaluations I reviewed. If my observations hold true across technology research firm reports in general, the deficiencies I cite may perpetuate the purchase and production of poor-quality software in at least two ways. First, if usability is indeed a critically important factor in a product’s success, the recommendations put forth by technology research firms may not accurately identify which products are better than others for an enterprise. Second, and more significantly for the software industry at large, if nobody appears to care about usability when evaluating or purchasing software, it seems unlikely that software manufacturers will devote significant resources to alleviating usability issues in their products. Together, these issues are likely one reason vendors continue to produce difficult-to-use software that organizations continue to purchase and implement.

Recommendations

Usability professionals who wish to respond to the treatment of usability in product comparisons produced by technology research firms should:

  • First and foremost, talk to management about their decision-making process for purchasing software.  Ask to be part of the evaluation and purchasing process, and inject usability where appropriate.
  • In the event that there is no evaluation process in place, work with management to create one, and make sure usability is an important factor in it.
  • Ask management if they use product evaluations published by technology research firms to help them make decisions. If they do, ask to see the reports and determine the treatment of usability in them. If usability is not adequately addressed in these reports, discuss this with management and offer to provide that evaluation.
  • Ask vendors how they address usability in the development of their products. If a vendor claims to follow user-centered design principles and to user-test its products, ask to see the research methodology and test results (preferably in Common Industry Format, which was described by Parush and Morse in their January 2003 UPA Voice article).

Conclusion

Usability practitioners should work to understand the process by which their management decides upon enterprise software. In the event that software purchasing decisions are informed by the reports of technology research firms, it is important for usability practitioners to read and respond to those reports in order to ensure usability is a key factor in the decision-making process where appropriate.