
Usability Testing Versus Expert Reviews

Ask UXmatters

A column by Janet M. Six
October 19, 2009

In this Ask UXmatters column—which is the first in a series of three columns focusing on usability—our experts discuss the use of usability testing versus expert reviews. In the upcoming columns, we’ll discuss what usability techniques to use when money or time is tight and how to best conduct remote usability testing.

Look to Ask UXmatters for answers to your questions about user experience matters. If you’d like to see our experts’ responses to your own question in an upcoming edition of Ask UXmatters, please send your question to: [email protected].

Q: Under what circumstances is it more appropriate to do usability testing versus an expert review? What are the benefits and weaknesses of each method that make one or the other more appropriate in different situations?—from a UXmatters reader.


The following experts have contributed answers to this question:

  • Todd Follansbee—User Experience Architect and Lead Consultant at Web Marketing Resources
  • Mike Hughes—User Assistance Architect at IBM Internet Security Systems; UXmatters columnist
  • Tobias Komischke—Director of User Experience at Infragistics
  • Stephanie Rosenbaum—CEO of pioneering UX consultancy Tec-Ed, Inc., and a charter member of UPA
  • Jim Ross—Principal of Design Research at Electronic Ink
  • Whitney Quesenbery—Principal Consultant at Whitney Interactive Design; Past-President, Usability Professionals’ Association (UPA); Fellow, Society for Technical Communication (STC); and UXmatters columnist
  • Paul Sherman—Principal at Sherman Group User Experience; Vice President of Usability Professionals’ Association; UXmatters columnist
  • Kyle Soucy—Founding Principal at Usable Interface
  • Daniel Szuc—Principal Usability Consultant at Apogee Usability Asia; Founding Member and President of the UPA China Hong Kong Branch

Determining Your Approach to Usability Evaluation

“Plan for this up front with your client,” suggests Daniel. “The main drivers for deciding when to use any research approach include the following variables—to name just a few:

  • time
  • budget
  • stage in development
  • how quickly the team needs results
  • functions you are evaluating
  • what you and the product team want to find out

“I’d be interested in hearing what your drivers are…

“The best approach also depends on your own standing on a team—whether they view you as the expert—and how well you can back up your findings from an expert review. For example, are you basing a design change on your own opinion, years of experience in their business domain, years of experience in usability or user experience, customer knowledge, user interface best practices, design patterns, or competitive analyses? Can you map a design change to projections on how it can help the business?

“We should view all user research methods as ways to help us make ongoing product enhancements and feel free to mix and match methods as needed. Unfortunately, too many people still see usability testing as a final stamp of approval, as their sole opportunity to hear from users, or as the only way to embed usability in the development process. We have to find ways to change this perception in industry.”

“We need to remember that expert review is a user-free method,” replies Stephanie. “Regardless of the evaluators’ skill and experience, they remain surrogate users—expert evaluators who emulate users—and not typical users. The results of expert review are not actual, primary user data and should lead to—not replace—user research.

“Real users always surprise us. They often have problems we don’t expect, and they sometimes breeze through where we expect them to bog down. Also, expert review rarely emulates all the key audience groups, and it doesn’t tell us which problems users will encounter most often.

“Another concern about expert review is political: Development and marketing teams often have strong design opinions. The results of an expert review can sound like just another opinion to them—although our UX perspective can help us be a tie-breaker.

“In contrast, the behavioral data—often including metrics—of usability testing is reassuring to corporate managers, especially in engineering-driven organizations. Usability testing also has a strong psychological benefit for the observers. If developers can watch people having problems using their site or product, this experience is often more convincing than the opinions of UX professionals, however similar.

“In an ongoing usability-evaluation program, the best approach could consist of two phases: expert review followed by usability testing. Expert review harvests the low-hanging fruit. We can make improvements immediately, so our test participants don’t spend half their sessions struggling with the same obvious usability problems. Plus, these problems can mask other equally important issues that we would have found if we had already addressed the problems we identified during an expert review.

“Here are some things to consider when choosing between these methods:

  • schedule—If your time is extremely limited and you have experts readily available, expert review can yield faster results than usability testing.
  • budget—Similarly, a modest expert review costs less than a modest usability test.
  • impact and validity—If no other user research will take place before a product’s release, conducting only an expert review puts you in a risky situation. If you don’t test with users before release, you will be testing with customers after release.
  • corporate culture—How much does your organization value expert advice? Will they act on it? If you’re in doubt, make a greater effort to find the time and budget for usability testing.”

Complementary Approaches to Usability Evaluation

“An expert review—sometimes called a usability review—is a nice way for you to get familiar with a product, a business, and its strategy and to identify both positives and opportunities for improvement you could later explore further through other research like usability testing,” explains Daniel. “I find expert reviews are excellent for identifying user-interface roadblocks that may be interfering with the overall user experience. However, expert reviews do not involve users, which is a disadvantage at times. They are not necessarily good at getting into a user’s mind or understanding whether what you’re presenting matches a real user need or whether the product is the right one in the first place. This is where usability testing can help.

“Usability testing’s main advantage is hearing the voice of the customer—quotations, frustrations, sighs, needs, and suggestions for improvements. You can better understand whether your product meets a user’s expectations through usability testing. Unfortunately, some people think usability testing is something that is done at the end, on a finished product, when you should embrace it right along with design and development—always testing your assumptions and seeing how you can enhance a product along the way.

“You should use what you have learned during your expert review as input for your usability testing. I often see situations in which usability professionals spend testing sessions evaluating parts of a product that an expert review, based on best practices, could better inform. This is not a good use of users’ time. We should value time with users as gold. So it is better to address strategic user-interface roadblocks or framework issues during usability testing with users and to address granular user-interface issues with an expert review. We don’t have to test everything with users.”

“Usability testing and expert reviews are both helpful and tend to find different issues,” responds Jim. “Ideally, you would use both to get the most comprehensive analysis of a user interface’s usability.

“Expert reviews are especially useful for finding violations of usability standards and best practices. These are often obvious issues that may or may not cause problems during usability testing. For many of these types of issues, usability testing is not necessary to find them or to say confidently that they are problems. For example, inconsistencies between links and page titles, underlined text that is not a link, difficult-to-read text, contrast issues, and accessibility issues are a few of the problems an expert review can easily find. An expert review can also be more thorough and evaluate more parts of a user interface than usability testing can, finding a greater number of problems, because testing is usually limited in time and scope, focusing on certain tasks and parts of an interface.

“Usability testing is better suited to finding the big issues—problems that affect users trying to perform tasks. Even the best usability expert is not part of the target audience for an interface, so he or she cannot predict all of the problems users will face. In addition to the obvious problems that an expert review finds, there are often issues that a usability expert can only assume might cause problems. These types of issues require usability testing to determine whether they are actually problems for users. Plus, there are always problems that a usability expert—not being part of the target user group—cannot find. Usability testing almost always reveals new insights that no one from a project team ever considered.

“Before doing usability testing, it is helpful to do at least an informal expert review to determine what to focus on during testing. You can do an expert review to find the obvious problems, allowing usability testing to find and validate the more important problems.”

Mike advocates usability testing: “Usability testing provides more authentic data than does an expert review. You are seeing users interpret a user interface in the context of trying to solve a real problem or conduct a typical task.” However, he also sees usability testing and expert review as complementary approaches. “Expert reviews are more appropriate for an initial assessment of how well a design or design team is following the accepted best practices of user-interface design. I recommend doing an expert review early in a project, when the management question is essentially In general, how well does it look like we’re doing? Later in a project, I consider usability testing to be the best way to answer the question How effective is a particular user interface in supplying a successful and satisfying user experience for a specific context?”

Heuristic Evaluations Versus Expert Reviews

“The benefits of heuristic evaluations cannot be denied,” asserts Kyle. “My eyes were really opened to their benefits after participating in one of Rolf Molich’s Comparative Usability Evaluations (CUE) on the Ikea Web site. During this study, the person who actually found the most usability problems was the one who conducted a heuristic evaluation.

“When I first started out as a usability professional, I didn’t put much stock in heuristic evaluations. I thought it was pompous for us to assume we knew better than users what was usable. I guess that means I didn’t put much stock in us as usability professionals. But, now, I believe experience has proven that—even without seeing the different perspectives of users and sometimes having limited domain knowledge—we can evaluate a system and provide actionable recommendations toward better usability. That being said, I still don’t believe a heuristic evaluation is a replacement for usability testing.

“We need to ask ourselves this important question: Should we consider something a usability problem only if it violates a usability principle? Personally, I don’t think so. There are times, when I’m evaluating a user interface, that I run across a problem that doesn’t violate any of the known heuristics. So, we can’t solely rely on heuristics to tell us if something is or is not usable.

“Many have criticized heuristic evaluations and other methods of usability evaluation for not providing reproducible results. For example, two people evaluating the same user interface can—and typically do—have different results. The question is Does this matter? Some people want a longer and more explicit list of heuristics, so we can try to codify usability evaluation and make it easier for anyone to review an interface and provide the same diagnosis. Essentially, this would try to turn what we do from a craft into a science. Although a lofty goal, I don’t think this is possible. I’m all for making usability more scientific, but I don’t believe defining more heuristics is the way to do it.

“What we do is a craft, and our own unique perspectives and experiences are what make our evaluations insightful and valuable. For this reason, it does make a difference who you hire. I think the fact that our work is not reproducible is a good reminder that we always need more than one pair of eyes evaluating our designs. No matter what method you use, you cannot achieve usability when working in a vacuum.

“I prefer to conduct expert reviews rather than heuristic evaluations for three reasons:

  1. A system can comply with all of the heuristics and still be unusable.
  2. I don’t think it’s possible to list all of the heuristics, especially as our products become increasingly more complex.
  3. Heuristics are not client friendly.

“Some UX professionals tend to have a poor habit of speaking their own language rather than the language of business. I constantly see heuristic evaluations that label problems by the heuristics a user interface violates rather than expressing problems in easily understandable terms. For this reason, if you’re going to conduct heuristic evaluations, I’m a big fan of creating your own heuristics that are specific to your product and audience.”

When Do You Really Need Usability Testing?

Paul recommends doing “usability testing in these situations:

  • when you absolutely, positively have to get the workflow right
  • when others in your organization—the designer, the product manager, development, management—need convincing
  • when you don’t have a complete, detailed, and validated user model

“In other words, in most situations. The biggest benefit of usability testing is that it takes design criticism out of the realm of opinion and puts it into the realm of data. When a product team treats a usability test as a group activity—observing and discussing the problems users encounter—it helps the team members reach a shared vision.”

Despite the value Kyle sees in expert reviews, she agrees, “I prefer to conduct testing rather than evaluations for one reason: Not because the results are better, but because, without testing, it is always just our opinion. My clients can argue with me, but they can’t argue with their users and customers.”

Testing to Assess Users’ Emotional Response

“If you have not yet had a usability expert directly involved in a project, start with an expert review,” suggests Todd. “If a site does not meet usability guidelines—and it is highly unlikely that it would meet guidelines without a UX expert in the mix—direct usability testing would likely just reveal barriers, and you would not have the chance to explore the deeper persuasion issues that testing can show. An expert review would reveal these same barriers for far less cost.

“At their simplest, sites evolve from being functional—the links, shopping cart, or forms all work, and a site functions in most browsers—to usable—meaning a site meets a high percentage of usability guidelines—to persuasive—a site’s compelling content motivates sales, consensus, referrals, and more. Since buying is primarily an emotional decision, based loosely upon facts, moving a site from being very usable to being persuasive requires usability testing to understand users’ emotional responses and judge a site’s persuasiveness. I am quick to add that I don’t know whether this belief is widely accepted among usability professionals, and I am not sure it is part of a typical curriculum.

“A usability test can demonstrate—through the eyes of a range of participants—users’ emotional response to the brand, statement of business purpose, graphics, long- and short-term messaging, competitive position, sales path, and more. Good, direct testing focuses on finding solutions, in addition to revealing problems, by soliciting from test participants how they might resolve an issue that arises. Most expert reviews offer only one viewpoint. Another advantage of direct usability testing is that stakeholders seem to respond more strongly to videos of test participants’ frustrations with a friction-laden, unnecessarily long form than to a screen shot and description of issues with the same form in an expert review. If it is on the small screen, right or wrong, we tend to accept it as fact.

“In summary, if you have had expert UX input in a site’s development, you would have had an expert review and are ready for direct usability testing. If you have not had expert UX advice, your site is likely to fail—like every site we have tested under those circumstances—to meet basic usability guidelines. Do a less expensive expert review to identify and fix basic problems, then go for the final, persuasive elements with a usability test. I know of no other way to measure the effectiveness of the application of persuasion psychology principles other than through direct usability testing.”

There Is No User-Centered Design Without Users

“If you must do an expert review, you should make sure that your approach includes the perspectives of users,” recommends Whitney. “Otherwise, it may be a good design checklist, but may not get at the real usability issues in the work. For example, look at the research-based guidelines on usability.gov. They give each guideline two ratings: one for the strength of the research evidence supporting the guideline and another for how important it is in enhancing usability—or preventing usability problems. One of the interesting and subtle points about these ratings is that there are some guidelines for which there is a lot of evidence, but that aren’t rated as very important.

“This is not to say that expert opinion is not important. Of course, it is. Especially if that expertise is based on much actual experience—as in Malcolm Gladwell’s Blink and Outliers. But, if you want to understand how someone who is not an insider to the design process would react to a product, you need a way to get the user perspective into your thinking. That’s the real value of usability testing as part of a design process. Even a few test participants will do.

“As Steve Krug says, ‘It’s not rocket surgery.’ What you are after is not just opinions, but insights into how different people would react to and interact with a product. All you need is 5 or 6 people, for 1 hour each, in 1 day of usability testing. Have your team observe the tests, discuss what you saw, and decide how to use your insights to improve the design. If you aren’t sure what something means, repeat your test.

“The biggest myths are that usability testing is hard, users are difficult to reach, you need many participants for a formative—or diagnostic—test, and you need a formal report.

“Now, if you really, really can’t do anything but an expert review, at least let your personas guide it. Use your personas—or other user research—as a lens through which to examine the design. What are users’ top reasons for using an application? What would they do first? Ginny Redish, Dana Chisnell, and Amy Lee did a large project at AARP using this technique—based on a lot of user research. Caroline Jarrett and I have done a presentation on this approach: ‘Conducting a User-Centered Expert Review.’

“What’s really amazing is how well it works. I’ve used it with a room full of local government Web managers with no usability experience. Give them just one persona as a way to look at a site and their view changes—in five minutes. Once they see there’s more than one way to experience a site, a lot of interesting conversations break out. And those conversations are the whole point.”

“I recommend always doing both,” encourages Tobias. “Expert reviews that are performed by specialists, using standards and heuristics, reveal easy-to-catch usability problems in a very cost-efficient way. By easy to catch, I mean things you can spot by checking against guidelines—for example, Does the user interface speak the typical language of the user? Experts cannot be as familiar with a specific context of use as users, so they investigate more on the surface. By cost efficiency, I mean you can do an expert review quickly and don’t have to recruit actual or future users or incentivize them to participate in a test. Also, the analysis and compilation of results is typically not as extensive as for usability tests.

“Usability testing—where you observe users while they try to solve authentic tasks using a system or concept that is under review—gets into much more depth and can reveal more serious problems. Users have mental models, experiences, and expectations that are genuinely their own and, thus, different from those of the UX professionals who carry out expert reviews. This authenticity makes usability tests more valid and valuable.

“While usability testing is more powerful than expert review, the two methods in combination are great, because you first want to discover the low-hanging fruit and get it out of the way. For that, it’s more cost effective to use expert reviews. Then, you can save your precious interaction with real users for discovering the harder-to-find issues. So my answer is this: First, do expert reviews, then do usability tests. This should be possible on every project. There’s no good excuse for not doing both. If someone actually holds a gun to your head and forces you to decide on one, choose usability testing. There’s no user-centered design without users!”

Resources

Chisnell, Dana. “Testing in the Wild, Seizing Opportunity.” User Interface Engineering, August 12, 2009. Retrieved October 13, 2009.

Gaffney, Gerry, and Daniel Szuc. The Usability Kit. Melbourne: SitePoint, 2006. Retrieved October 13, 2009.

Molich, Rolf. “CUE - Comparative Usability Evaluation by Dialog Design.” DialogDesign ved Rolf Molich. Retrieved October 13, 2009.

Quesenbery, Whitney. “Choosing the Right Usability Technique: Getting the Answers You Need.” User Friendly 2008. Retrieved October 13, 2009.

Quesenbery, Whitney, and Caroline Jarrett. “Conducting a User-Centered Expert Review.” STC Proceedings, June 2, 2007. Retrieved October 13, 2009.

Quesenbery, Whitney, and Caroline Jarrett. “Letting Participants Choose Their Own Tasks.” STC Summit, 2009. Retrieved October 13, 2009.

Rosenbaum, Stephanie. “Not Just a Hammer: When and How to Employ Multiple Methods in Usability Programs.” UPA 2000 Proceedings, 2000. Retrieved October 13, 2009.

Rubin, Jeffrey, and Dana Chisnell. Handbook of Usability Testing. Indianapolis, IN: Wiley, 2008.

Szuc, Daniel. “Finding Gold in Your User Research Results.” UXmatters, July 6, 2009. Retrieved October 13, 2009.

usability.gov. “Usability Testing Guidelines.” U.S. Department of Health and Human Services. Retrieved October 13, 2009.

Janet M. Six

Product Manager at Tom Sawyer Software

Dallas/Fort Worth, Texas, USA

Dr. Janet M. Six helps companies design easier-to-use products within their financial, time, and technical constraints. For her research in information visualization, Janet was awarded the University of Texas at Dallas Jonsson School of Engineering Computer Science Dissertation of the Year Award. She was also awarded the prestigious IEEE Dallas Section 2003 Outstanding Young Engineer Award. Her work has appeared in the Journal of Graph Algorithms and Applications and the Kluwer International Series in Engineering and Computer Science. The proceedings of conferences on Graph Drawing, Information Visualization, and Algorithm Engineering and Experiments have also included the results of her research.
