
Conducting Expert Reviews: What Works Best?

Ask UXmatters


A column by Janet M. Six
January 27, 2014

In this edition of Ask UXmatters, our experts discuss their views on the best approaches to conducting expert reviews.

Each month in my column Ask UXmatters, our UX experts provide answers to our readers’ questions about a broad range of user experience matters. To get answers to your own questions about UX strategy, design, user research, or any other topic of interest to UX professionals in an upcoming edition of Ask UXmatters, please send your questions to: [email protected].


The following experts have contributed answers to this edition of Ask UXmatters:

  • Carol Barnum—Director of User Research and Founding Partner at UX Firm; author of Usability Testing Essentials: Ready, Set… Test!
  • Steve Baty—Principal of Meld Studios; President of IxDA; UXmatters columnist
  • Drew Davidson—VP of Design at ÄKTA
  • Pabini Gabriel-Petit—Senior Director, User Experience and Design at Apttus; Publisher and Editor in Chief, UXmatters; Founding Director of Interaction Design Association (IxDA); UXmatters columnist
  • Steven Hoober—Mobile Interaction Designer and Owner at 4ourth Mobile; author of Designing Mobile Interfaces; UXmatters columnist
  • Tobias Komischke—Director of User Experience at Infragistics
  • David Kozatch—Principal at DIG
  • Cory Lebson—Principal UX Consultant at Lebsontech; President, User Experience Professionals’ Association (UXPA)
  • Itamar Medeiros—Senior User Experience Designer at ROAMWORKS
  • Jim Ross—UX Researcher and Designer; UXmatters columnist
  • Simon White—Head of UX at Voyages-sncf.com

Q: When you do an expert review, do you make a note of every issue that may impact the user experience, just the most pressing issues, or something in between?—from a UXmatters reader

Summarizing Issues

“I think it’s important to note some of the lower-priority issues because many items that seem to have a small impact on their own can add up, giving users a poor overall impression of a user interface,” replies Jim. “But instead of noting each instance of a problem, you can note just one example, indicate that it’s prevalent throughout the user interface, and describe the impact that has overall.”

“I make note of each and every issue,” responds Tobias. “If you bring your car to the shop for a regular inspection, you want to know everything that’s wrong. Not that you’d necessarily fix everything, but you do want to get the full picture. Obviously, if the same problem manifests itself in many different places in a product—for example, a font size that’s too small—I would not call it out every single time I see it, but state the issue once, saying that it appears throughout.”

“We are researchers—of course, we take note of everything!” David exclaims. “The smallest issue could have the largest consequence down the road.”

Ranking and Prioritizing Issues

“When doing an expert review,” answers Itamar, “I believe that we should note every single issue. That said, product managers and development teams usually find long lists of issues overwhelming, and it’s difficult for them to put the impact of each issue into perspective. This is why I rank each issue according to different criteria that I’ve previously explained or discussed with stakeholders. For example, in the case of heuristic evaluations, I usually tell stakeholders that I’ll rank each issue that I find according to a Commonality / Frequency / Severity scale.

“Discussing the ranking criteria for issues beforehand can also be very helpful when prioritizing fixes later on, especially when the team is already using UX metrics of some kind, such as the UXI matrix that Jon Innes has proposed.” Itamar recommended this technique in my Ask UXmatters column “The Best Ways to Prioritize Products and Features.”
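
Itamar’s Commonality / Frequency / Severity ranking lends itself to a simple composite score. The following is a minimal sketch in Python; the 1–5 scales, the sample issues, and the multiplicative weighting are illustrative assumptions, not Itamar’s prescription, so agree on the actual criteria and weighting with your stakeholders first.

from dataclasses import dataclass

@dataclass
class Issue:
    description: str
    commonality: int  # 1-5: how widespread the issue is across the product
    frequency: int    # 1-5: how often users are likely to encounter it
    severity: int     # 1-5: how badly it hinders task completion

    @property
    def priority(self) -> int:
        # Multiplying the ratings makes issues that score high on every
        # criterion stand out sharply; a weighted sum would work, too.
        return self.commonality * self.frequency * self.severity

issues = [
    Issue("Body font too small throughout", commonality=5, frequency=5, severity=2),
    Issue("Checkout button label is misleading", commonality=1, frequency=4, severity=5),
]

# Report the findings in descending order of combined priority.
for issue in sorted(issues, key=lambda i: i.priority, reverse=True):
    print(f"{issue.priority:>3}  {issue.description}")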

“If you report every issue that you see, prioritization is essential,” advises Cory. “As long as you prioritize appropriately, there is no reason to avoid listing anything that you happen to see. Consider prioritizing both the issues from expert reviews and the results from research studies into high-, medium-, and low-severity items. (Although I recently had one client ask me to use the word impact instead of severity.) High-severity items are those that would have a big impact on users’ successfully completing a task; medium-severity items would slow users down or cause some confusion; and low-severity items are issues that are annoying, but do not truly hinder users.

“In some cases, you may also get requests to prioritize items based on how hard it will be for developers to make the changes. Don’t take on this challenge unless you first speak with the developers to learn what would and would not be easy for them to change.”

“It might be helpful for you to base your expert review on a heuristic template such as the one that the Nielsen Norman Group offers,” recommends Drew. “This will give you a good framework to support your review. I always start by making note of every single issue, no matter how small. Then, I prioritize the issues, depending on the business needs and the purpose of the expert review. For example, if conversion is the main metric that the business wants to improve, I make sure that I review all of the issues that would impact conversion; if usability is the main metric, I prioritize the review of usability issues. I also recommend supporting your review with relevant documents. Examples make your review more actionable—and understandable—for the people who will be making the changes.”

“I also rate the severity or urgency of each finding,” says Tobias, “following Jakob Nielsen’s rating scale for usability problems. In practice, the 0 category never appears in my reports, because I document only actual problems:

  • 0 = Not a usability problem at all
  • 1 = Cosmetic problem only—Such an issue need not be fixed unless sufficient time is available on a project.
  • 2 = Minor usability problem—This is a low-priority issue that is less important to fix.
  • 3 = Major usability problem—This is a high-priority issue that it is important to fix.
  • 4 = Usability catastrophe—It is imperative to fix such an issue before releasing a product.

“This rating system—which is similar to that for a car inspection—allows your team to focus on the most important fixes and not go crazy fixing cosmetic issues. Now, having said all of this, when presenting the results to stakeholders, I’ll show only category-4 and category-3 items, because of the limited attention span of my audience. The full list of issues is still a deliverable, but meant for consumption only when more time is available. Follow-up meetings may be necessary to walk through the full list, but usually, in those meetings, no managers or other higher-ups are present.”
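
Tobias’s split, with a full severity-rated list as the deliverable and only category-3 and category-4 items in the stakeholder presentation, maps onto a very small data model. Here is a minimal sketch in Python; the findings themselves are invented for illustration.

from enum import IntEnum

class Severity(IntEnum):
    COSMETIC = 1      # fix only if time allows
    MINOR = 2         # low priority
    MAJOR = 3         # high priority
    CATASTROPHE = 4   # must fix before release

findings = [
    ("Low-contrast placeholder text", Severity.COSMETIC),
    ("Inconsistent button labels", Severity.MINOR),
    ("Error messages blame the user", Severity.MAJOR),
    ("Form silently discards saved data", Severity.CATASTROPHE),
]

# The full list, sorted most severe first, goes into the written report.
full_report = sorted(findings, key=lambda f: f[1], reverse=True)

# The stakeholder deck shows only major problems and catastrophes.
for description, severity in full_report:
    if severity >= Severity.MAJOR:
        print(f"[{severity.name}] {description}")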

Focusing on the Most Egregious Issues

“I used to do expert reviews in a formal way,” answers Steven Hoober, “not only noting every issue, but evaluating every UI widget and interaction, and indicating the good, the neutral, and the bad. But I think the thought process behind your question is right on track. I have cut my evaluations down, over time, to address only the problems. This makes the report much more manageable—something that clients and developers will actually look at.

“That said, I do include every single bad thing and very clearly indicate its severity. Severity is pretty easy to judge because, when a user interface fails a heuristic test, you must indicate what the user must do instead. Much like when evaluating technical quality, you can rate how much trouble an issue will cause.”

Striking a Balance

“When conducting an expert review, there is always a balance to be struck between the size of the system or service you’re reviewing and the amount of time you have at your disposal, as well as whether your role includes recommending solutions,” advises Steve Baty. “As I go through the review process, I catalog the following three types of issues:

  1. Fundamental issues that go against known principles of design
  2. Potential issues that I should test with users
  3. Differences in style—that is, things that I would do differently if it were up to me

“The first thing to do is to clarify the audience for the system—that is, the user—and the system’s purpose. It helps the quality of your review if you have a clear sense of these two aspects of the system.”

Fundamental Issues

“These are the most important things for you to point out in an expert review and should be the primary focus of your time and energy,” continues Steve. “If you don’t cover these, you’ve failed to do your job. A first pass through the system should focus on these issues. Take screen shots as you go. These issues would likely crop up no matter who conducted the review—assuming some level of capability and expertise. None of these issues should be just your opinion! You should be able to back up each one with a general UX principle from information architecture, cognitive psychology, interaction design, usability, and so on.

Potential Issues

“Once you’ve identified the major problems, your next focus should be on issues that could potentially prevent the system’s users from getting through their tasks,” recommends Steve. “These are the types of design decisions that could go one way or another—decisions that really get settled only in the hands of users. There should be no ambiguity about the distinction between your list of fundamental issues and these potential issues. The former classification means the system is broken; the latter identifies risks.

Differences in Style

“If you have time, a good source of additional insights that you can provide to your client and their designers is to note any areas that are neither broken nor potentially problematic, but that you would have done differently,” suggests Steve. “Explaining your rationale for the way you would have solved a design problem provides a different perspective on the problem and may help the design team to tackle the issue in a new way next time. This isn’t about convincing them to do things your way, but about offering alternatives as a learning exercise.

“Finally, when illustrating the issues—especially the fundamental flaws—it can be very helpful to either sketch out a quick solution or point to an existing system that does a really good job of solving the design problem.”

Keeping Your Audience in Mind

“The first thing to do is to ask the key stakeholders who are commissioning the expert review what they want,” recommends Cory. “In some cases, they may be looking for the major stumbling blocks and won’t want to be overwhelmed with other non-pressing issues. In other cases, stakeholders may want you to report every issue that you see, because the stakeholders will then decide what they should and should not fix immediately. Your default approach, unless you know otherwise, should most likely be to report whatever you see.”

“Obviously, it’s most important to note the most pressing issues,” advises Jim, “but I usually note some medium-priority and lower-priority issues, too. Of course, the right approach depends on how many problems there are to note. You don’t want to overwhelm your audience with too many nitpicky issues. People have only so much attention they can devote to considering issues, so including too many minor issues can take their attention away from the major issues. Also, you have only so much time in which to conduct an expert review. With some user interfaces, you could go on forever if you noted every issue.”

“The right answer to your question could depend on how thorough you have time to be,” responds Simon. “But the key thing to consider when presenting your review is that you’re not going to get most stakeholders to pay attention to every detail. The most pressing issues are the ones that they will need to address first, so they should be the focus of your review. Anything else is a bonus—for you to keep track of either in the endnotes of your report or in a comprehensive list. A long list of stuff that’s wrong with their application or Web site can scare people.”

Getting Feedback from Multiple People

“When several UX professionals conduct expert reviews to evaluate the same user experience, they often identify different problems,” says Pabini. “The issues that each reviewer discovers depend on the breadth and depth of that UX professional’s expertise. For example, someone who is well-versed in interaction design, information architecture, visual design, and human factors would very likely identify a greater breadth of usability issues than someone who is a specialist in just one aspect of design. Therefore, to ensure more comprehensive coverage of issues, it can be beneficial to have more than one UX professional independently conduct an expert review, then consolidate the findings from all of the reviews.

“Interviewing users or doing a usability study can help you to better understand the issues that you discover when conducting an expert review and to do a better job of prioritizing the issues.”

“When doing user research, conduct multiple interviews,” recommends David. “You can find places where an individual plays up or plays down a particular issue, then follow up with the next person to see whether they agree or disagree. You won’t know what issues will have the greatest impact until you’ve finished the entire process. Make no assumptions going in. Lousy wording in a message box that only a small percentage of users will stumble upon could kill your whole user experience for those people. They may then write in a blog post that the whole user interface stinks—and the next thing you know, people are passing over your product and going someplace else. It could happen.

“Ask participants whether you can audiotape them, create a transcript, then create a word cloud from the transcript. Look for patterns. Later, when your client says, ‘But our users are the experts, and they didn’t care about that!’ show it to them. You can prove that something your client believed to be inconsequential is actually an important issue.”
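
A word cloud is, at bottom, a word-frequency count over the transcript. Here is a minimal sketch in Python of that counting step; the file name and the deliberately short stop-word list are illustrative assumptions.

import re
from collections import Counter

STOP_WORDS = {"the", "a", "an", "and", "or", "to", "of", "it", "i", "is", "that"}

with open("interview_transcript.txt", encoding="utf-8") as f:
    words = re.findall(r"[a-z']+", f.read().lower())

counts = Counter(word for word in words if word not in STOP_WORDS)

# The most frequent terms are the patterns to look for, and the evidence to
# show a client who believes an issue is inconsequential.
for word, count in counts.most_common(20):
    print(f"{count:>4}  {word}")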

What to Do After You Complete Your Review

“Once you’ve prioritized the issues, you need to make sure that the most salient, high-priority issues are the most apparent in whatever reporting format you’re using,” suggests Cory. “When given enough time, you can document everything in a report using Word, then create a PowerPoint presentation that calls out the most pressing issues with illustrations. You can refer people viewing the PowerPoint to your full report, where they can see additional details and lower-priority issues. If you are creating a report in only a single format, you could order the issues by priority—or, as I often do, order the issues within major sections of a Web site or application by priority.

“A client once asked me to do a meta-analysis of my findings from a number of heuristic reviews that I had done for them over the years. These were in Word and/or PowerPoint—each covering a different area of their massive Web site. These all-inclusive reviews covered both the major issues and the minor issues. They wanted to know how many of the recommendations they had taken over time and what issues they had missed or skipped. To present what ended up being over a thousand recommendations, I chose to use Excel, including short snippets about the issues and my recommendations. Although the document included all levels of recommendations, this Excel spreadsheet was both sortable and filterable, so people could filter the issues by severity, by the area of the site that an issue impacted, and—since this was a retrospective—by whether the issue had gotten resolved, had been superseded by other changes, or remained an unresolved issue.”
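
A sortable, filterable issue log like the one Cory describes is also easy to build programmatically. Here is a minimal sketch in Python using pandas; the column names and sample rows are illustrative assumptions, and writing the result to Excel preserves the built-in sorting and filtering that Cory’s client relied on.

import pandas as pd

issues = pd.DataFrame([
    {"area": "Checkout", "severity": "High", "status": "Unresolved",
     "recommendation": "Label the primary action explicitly"},
    {"area": "Search", "severity": "Low", "status": "Resolved",
     "recommendation": "Increase result-snippet contrast"},
    {"area": "Checkout", "severity": "Medium", "status": "Superseded",
     "recommendation": "Collapse optional fields by default"},
])

# Filter the retrospective the way Cory's client did: by severity, by the
# area of the site an issue impacted, and by resolution status.
open_high = issues[(issues["severity"] == "High") & (issues["status"] == "Unresolved")]
print(open_high[["area", "recommendation"]])

# Write the full log to a filterable spreadsheet. (Requires openpyxl.)
issues.to_excel("heuristic_review_retrospective.xlsx", index=False)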

“You will probably want to provide solutions to the problems that you identify as well,” responds Steven Hoober. “I do, too. But be careful about doing this. Very often, the best solution is not a small, easy, tactical solution, but a more serious change that would solve multiple issues on your list. At the end of the review process, re-evaluate your list of issues to see what solutions you can group. This helps to ensure that anyone estimating the effort necessary to improve the user interface will have a better understanding of what it will take—as well as to prove the value of continual UX involvement.”
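
Steven’s final re-grouping pass can be as simple as inverting an issue-to-solution mapping. Here is a minimal sketch in Python; the issues and solutions are invented for illustration.

from collections import defaultdict

issue_to_solution = {
    "Inconsistent field validation on signup": "Introduce a shared form framework",
    "Inconsistent field validation at checkout": "Introduce a shared form framework",
    "No inline recovery from input errors": "Introduce a shared form framework",
    "Ambiguous icon on the dashboard": "Commission a coherent icon set",
}

grouped = defaultdict(list)
for issue, solution in issue_to_solution.items():
    grouped[solution].append(issue)

# One strategic change per group makes effort estimation concrete and shows
# the value of addressing root causes rather than individual symptoms.
for solution, grouped_issues in grouped.items():
    print(f"{solution} (resolves {len(grouped_issues)} issues)")
    for issue in grouped_issues:
        print(f"  - {issue}")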

Telling the Story

“This topic is near and dear to my heart,” says Carol, “and I have been giving talks on heuristic evaluations at various conferences.” (See Carol’s presentation “How to Tell the Story: The Rhetoric of Heuristic Evaluation” on SlideShare.)

“The reason I think this topic is so important is that I have come to the point, over time, of wondering how much anyone actually does with the results of an expert evaluation—particularly when it is the only usability deliverable a client gets,” continues Carol.

“If you’re conducting an expert review as part of a two-pronged approach—doing an expert review, then following it up with usability testing—the likelihood that your recommendations will get implemented increases. You can use the expert review in either of two ways:

  1. You can point out potential issues that may affect the user experience in actual use; then fix as many of these problems as time and the budget allow; and finally, conduct usability testing.
  2. Your expert review can help usability researchers to identify the potential issues that could adversely affect the user experience. The researchers can then use these potential issues to plan the task scenarios for a usability study and, finally, conduct a usability study to determine whether they are, indeed, issues for users.

“In either scenario, there very likely won’t be sufficient time or resources to fix everything, so as my colleague Steve Krug asserts, you should ‘focus ruthlessly on a small number of the most important problems.’ In Rocket Surgery Made Easy, Steve was speaking specifically of usability testing, but I know that he feels the same about expert reviews. By focusing, you increase the likelihood that the changes will get made.

“The tendency, however, is to produce a long list of problems that—even when you’ve prioritized them by severity—tempts the product team to fix only the low-hanging fruit—typically, those things that are easiest to fix. Often, the thinking is that the most severe problems are too hard to fix, so we’ll get those next time. Even when you group the problems by levels of severity, a list of every problem is just too much to deal with. So the question arises: Why does the typical report from an expert review document every issue so thoroughly?

“When I ask people this question at conferences, I can generally anticipate getting the following answer: ‘Because the client is paying for the report—the only deliverable from an expert review—and the more detailed the report, the greater its price and the stronger the justification for the time and expense it took to produce it.’

“So, what’s the real value of an expert review? To create the motivation to change the product and improve its user experience. And what’s the best way to effect that change? In the evolution of my thinking on this topic, I’ve moved away from documenting every problem and shifted the scope of my deliverable by creating persona-based scenarios and telling the story of the user’s experience. Using a storytelling approach helps decision makers to understand the extent of the problems in dramatic ways that can improve their motivation to fix the biggest problems.” (Read Quesenbery and Brooks’ Storytelling for User Experience and Redish and Chisnell’s “Designing Web Sites for Older Adults.”)

“In my experience,” concludes Carol, “rather than documenting everything, it’s better to focus on the user’s experience through storytelling. The report, which I create in PowerPoint, tells the user’s story, screen by screen, with the user’s comments in callouts or below each screenshot. Where do the comments come from? They come from the expert reviewers’ personal reactions to walking in the user’s shoes. Although expert reviews will never take the place of learning from real users through usability testing, this storytelling approach gets decision makers one step closer to understanding the issues from the user’s point of view.”

Janet M. Six is Product Manager at Tom Sawyer Software, in Dallas/Fort Worth, Texas, USA. Dr. Six helps companies design easier-to-use products within their financial, time, and technical constraints. For her research in information visualization, Janet was awarded the University of Texas at Dallas Jonsson School of Engineering Computer Science Dissertation of the Year Award. She was also awarded the prestigious IEEE Dallas Section 2003 Outstanding Young Engineer Award. Her work has appeared in the Journal of Graph Algorithms and Applications and the Kluwer International Series in Engineering and Computer Science. The proceedings of conferences on Graph Drawing, Information Visualization, and Algorithm Engineering and Experiments have also included the results of her research.
