Testing Your Own Designs Redux

Envision the Future

The role UX professionals play

A column by Paul J. Sherman
December 21, 2009

My last column, “Usability Testing Your Own Designs: Bad Idea?” engendered a lot of discussion among UXmatters readers about some related issues that crop up for many designers throughout their careers:

  • Is it possible to both design and test the usability of our own designs effectively?
  • If so, how can we test our own designs well?
  • If not, then what?

In that column, I took the position that it was entirely possible for designers to test their own designs—with one catch: confirmatory bias would make us less likely to completely discard our faulty designs and start afresh. Then, I provided some guidelines for testing our own designs. Readers’ responses made it clear that they had other ideas. This column attempts to synthesize a new set of guidelines for testing your own designs, based on the best of my own ideas and those of UXmatters readers.

Revisiting Guidelines for Testing Your Own Designs

Guideline 1—When testing your own designs, don’t think of it as a test to pass or fail; think of it as part of your design process.

Whitney Quesenbery aptly recommends viewing usability testing “…not just as a way of seeing if your design works, but as part of the design process. That’s one of the notions behind iterative design. It’s not just that you create design, then test it, then change it, and so on.”

This is excellent advice. Usability testing should not be a stage gate in your design and development process. It should be a tool with which to gather helpful, diagnostic information from your target users. It’s a means of understanding the goodness of a design’s fit to the intended users’ problems. Put another way—if you’re properly incorporating user performance feedback into your ideation, design, and development process, you should never hear the phrase “My design failed usability testing.” That leads me to a corollary for Guideline 1:

Guideline 1a—Test early, test as often as possible, and test lo-fi prototypes rather than making usability testing a make-or-break event in your design lifecycle.

Readers pushed back on my absolutist stance that designers should always focus on the negative aspects of their own designs, and their comments drove the emergence of Guideline 2:

Guideline 2—When testing your own designs, you should seek disconfirming evidence, but be alert for joys and delighters, too.

In my last column, I took the position that the insidious effect of confirmatory bias necessitates our always concentrating on what is wrong with our designs. I admit that I oversold the whole confirmatory bias idea—and as a result, I backed myself into a rhetorical corner. Daniel Szuc was kind enough to throw me a rope and pull me out of that corner when he said, “One could be overly negative about just about anything, but the feedback would not necessarily help the design. Just as one could be overly positive, too.”

Susan Hura took a similar position, saying, “I like Paul’s suggestion of orienting yourself toward negative feedback when you test your own designs, but I think it closes you off to some valuable information in usability testing. Focusing on negative feedback doesn’t help you get to a better design solution, especially if the better solution is one you’ve already considered and rejected.”

Dan and Susan are right. A balanced approach is always better, whether it’s you or someone else testing your design. However, this fact remains: People are pretty much hardwired to give preference to evidence that confirms their assertions, attitudes, and belief systems. A design that you’ve created arises from your belief system and represents a value statement that you’ve rendered into a tangible, manipulable form. It’s your baby. So how do you defend against confirmatory bias when it’s your baby that you’re holding up for criticism?

I don’t have a great answer to that question. It’s extremely difficult for people to be objective and balanced in their criticism of their own designs—especially when a boss or client is also evaluating them on the quality of their work. As Bob Dylan sang, you’ve gotta serve somebody. (I’m not a big Dylan fan, but it just seemed to fit here.)

Indeed, an anonymous commenter on my last column thought I hadn’t gone far enough in describing the pitfalls of testing our own designs, citing organizational and political factors that could affect our ability to test our own designs effectively and impartially, as well as various tricks designers could pull to stack the deck, presenting their designs in the best possible light:

“… [It is] possible that the designer may well bias the plan and the script toward things that the designer has focused more heavily on, and therefore, things that the design is more likely to perform well on.”

I think that’s a tough effect to defend against. Susan Hura recommends taking the Zen approach of a beginner’s mind when testing your own designs. I agree with her and want to add two key points. First, as a designer testing your own design, stay focused on the outcome—that is, a good design—rather than on how your boss is going to perceive your initial design. Second, if you’ve already done the work of selling management and your peers on the value of usability testing and iterative design, you’ll probably have a better-than-even chance of using usability evaluation techniques effectively to assess your design. Plus, taking a balanced approach, as I’ve recommended in Guideline 2, keeps you attuned to the delighters—the Wow, cool! moments—your design may engender.

Deborah Mayhew wrote eloquently about the benefits of designers’ being more sensitive to specific concerns and questions: “Personally, I believe there are distinct advantages to testing our own designs that probably outweigh the potential biases we may bring. As designers, we have concerns, questions, or reservations about specific aspects of our designs, and we design our tests around those. We are looking with a very focused eye to learn about those aspects. An independent tester will not be so sensitive to those aspects. I realize this can introduce bias or at least risk missing other things we are not looking for. But I agree with [some of the other commenters] suggesting that, for many of us, we really are pursuing the truth and looking for the problems in our designs rather than simply trying to confirm them. I guess, as with many things, it depends on the integrity of the practitioner, but personally, I would always rather test my own designs than leave it in the hands of others who are not intimately familiar with what I am trying to find out about my design.”

I wouldn’t personally go as far as Deborah did when she said she would always rather test her own designs. But her point about the integrity of the designer really resonated with me, and here’s why: when we design user interfaces, we enter into an implicit agreement, one that every designer of user interfaces has entered into, whether consciously or not. We are essentially agreeing to try to solve the design problem to the best of our ability.

The object we’re designing could be a Web form, an electric toothbrush, or a suspension bridge. It doesn’t matter. When we design, we are in effect agreeing to honor the craft of design. And honoring the craft of design takes integrity. What separates craft from art—an inherently subjective endeavor—is that, in almost all instances, you can assess craft objectively. (Although, to be fair, it’s not always clear what measures to use when assessing a design’s goodness of fit to a design problem—or even what the problem is, for that matter.)

All of this brings me to a third guideline, which is this:

Guideline 3—When you’re trying to solve a design problem, usability testing serves design. It’s a tool. Use it to improve your design, not to justify your actions. 

Founder and Principal Consultant at ShermanUX

Assistant Professor and Coordinator for the Masters of Science in User Experience Design Program at Kent State University

Cleveland, Ohio, USA

ShermanUX provides a range of services, including research, design, evaluation, UX strategy, training, and rapid contextual innovation. Paul has worked in the field of usability and user-centered design for the past 13 years. He was most recently Senior Director of User-Centered Design at Sage Software in Atlanta, Georgia, where he led efforts to redesign the user interface and improve the overall customer experience of Peachtree Accounting and several other business management applications. While at Sage, Paul designed and implemented a customer-centric contextual innovation program that sought to identify new product and service opportunities by observing small businesses in the wild. Paul also led his team’s effort to modernize and bring consistency to Sage North America product user interfaces on both the desktop and the Web.

In the 1990s, Paul was a Member of Technical Staff at Lucent Technologies in New Jersey, where he led the development of cross-product user interface standards for telecommunications management applications. As a consultant, Paul has conducted usability testing and user interface design for banking, accounting, and tax preparation applications, Web applications for financial planning and portfolio management, and ecommerce Web sites.

In 1997, Paul received his PhD from the University of Texas at Austin. His research focused on how pilots’ use of computers and automated systems on the flight deck affects their individual and team performance. Paul is Past President of the Usability Professionals’ Association, was the founding President of the UPA Dallas/Fort Worth chapter, and currently serves on the UPA Board of Directors and Executive Committee. Paul was the Editor of, and contributed several chapters to, the book Usability Success Stories: How Organizations Improve by Making Easier-to-Use Software and Web Sites, which Gower published in October 2006. He has presented at conferences in North America, Asia, Europe, and South America.
