Testing Your Own Designs Redux

By Paul J. Sherman

Published: December 21, 2009

My last column, “Usability Testing Your Own Designs: Bad Idea?” engendered a lot of discussion among UXmatters readers about some related issues that crop up for many designers throughout their careers:

  • Is it possible to both design and test the usability of our own designs effectively?
  • If so, how can we test our own designs well?
  • If not, then what?

In that column, I took the position that it was entirely possible for designers to test their own designs—with one catch: confirmatory bias would make us less likely to completely discard our faulty designs and start afresh. Then, I provided some guidelines for testing our own designs. Once people read the column, it quickly became clear that they had other ideas. This column attempts to synthesize a new set of guidelines for testing your own designs, based on the best of my own ideas and those of UXmatters readers.

Revisiting Guidelines for Testing Your Own Designs

Guideline 1—When testing your own designs, don’t think of it as a test to pass or fail; think of it as part of your design process.

Whitney Quesenbery aptly recommends viewing usability testing “…not just as a way of seeing if your design works, but as part of the design process. That’s one of the notions behind iterative design. It’s not just that you create design, then test it, then change it, and so on.”

This is excellent advice. Usability testing should not be a stage gate in your design and development process. It should be a tool with which to gather helpful, diagnostic information from your target users. It’s a means of understanding the goodness of a design’s fit to the intended users’ problems. Put another way—if you’re properly incorporating user performance feedback into your ideation, design, and development process, you should never hear the phrase “my design failed usability testing.” This leads me to a corollary to Guideline 1:

Guideline 1a—Test early, test as often as possible, and test lo-fi prototypes rather than making usability testing a make-or-break event in your design lifecycle.

Other readers’ comments on my absolutist stance that designers should always focus on the negative aspects of their own designs led to Guideline 2:

Guideline 2—When testing your own designs, you should seek disconfirming evidence, but be alert for joys and delighters, too.

In my last column, I took the position that the insidious effect of confirmatory bias necessitates our always concentrating on what is wrong with our designs. I admit that I oversold the whole confirmatory bias idea—and as a result, I backed myself into a rhetorical corner. Daniel Szuc was kind enough to throw me a rope and pull me out of that corner when he said, “One could be overly negative about just about anything, but the feedback would not necessarily help the design. Just as one could be overly positive, too.”

Susan Hura took a similar position, saying, “I like Paul’s suggestion of orienting yourself toward negative feedback when you test your own designs, but I think it closes you off to some valuable information in usability testing. Focusing on negative feedback doesn’t help you get to a better design solution, especially if the better solution is one you’ve already considered and rejected.”

Dan and Susan are right. A balanced approach is always better, whether it’s you or someone else testing your design. However, this fact remains: People are pretty much hardwired to give preference to evidence that confirms their assertions, attitudes, and belief systems. A design that you’ve created arises from your belief system and represents a value statement that you’ve rendered into a tangible, manipulable form. It’s your baby. So how do you defend against confirmatory bias when it’s your baby that you’re holding up for criticism?

I don’t have a great answer to that question. It’s extremely difficult for designers to be objective and balanced in their criticism of their own designs—especially when their boss or client is also evaluating them on the quality of their work. As Bob Dylan said, you’ve gotta serve somebody. (I’m not a big Dylan fan, but this just seemed to fit here.)

Indeed, an anonymous commenter on my last column thought I hadn’t gone far enough in describing the pitfalls of testing our own designs, citing organizational and political factors that could affect our ability to test our own designs effectively and impartially, as well as various tricks designers could pull to stack the deck, presenting their designs in the best possible light:

“… [It is] possible that the designer may well bias the plan and the script toward things that the designer has focused more heavily on, and therefore, things that the design is more likely to perform well on.”

I think that’s a tough effect to defend against. Susan Hura recommends taking the Zen approach of a beginner’s mind when testing your own designs. I agree with her and want to add this key point: if you, as a designer testing your own design, stay focused on the outcome—that is, a good design—rather than on how your boss is going to perceive your initial design, and if you’ve already done the work of selling management and your peers on the value of usability testing and iterative design, you’ll probably have a better-than-even chance of using usability evaluation techniques effectively to assess your design. Plus, taking a balanced approach, as I’ve recommended in Guideline 2, keeps you attuned to recognizing the delighters—the Wow, cool! moments—your design may engender.

Deborah Mayhew wrote eloquently about the benefits of designers’ being more sensitive to specific concerns and questions: “Personally, I believe there are distinct advantages to testing our own designs that probably outweigh the potential biases we may bring. As designers, we have concerns, questions, or reservations about specific aspects of our designs, and we design our tests around those. We are looking with a very focused eye to learn about those aspects. An independent tester will not be so sensitive to those aspects. I realize this can introduce bias or at least risk missing other things we are not looking for. But I agree with [some of the other commenters] suggesting that, for many of us, we really are pursuing the truth and looking for the problems in our designs rather than simply trying to confirm them. I guess, as with many things, it depends on the integrity of the practitioner, but personally, I would always rather test my own designs than leave it in the hands of others who are not intimately familiar with what I am trying to find out about my design.”

I wouldn’t personally go as far as Deborah did when she said she would always rather test her own designs. But her point about the integrity of the designer really resonated with me, and here’s why: when we design user interfaces, whether consciously or not, we enter into the same implicit agreement that everyone who has ever designed a user interface has entered into. We are essentially agreeing to try to solve the design problem to the best of our ability.

The object we’re designing could be a Web form, an electric toothbrush, or a suspension bridge. It doesn’t matter. When we design, we are in effect agreeing to honor the craft of design. And honoring the craft of design, indeed, takes integrity. What separates craft from art—an inherently subjective endeavor—is that, in almost all instances, you can assess craft objectively. (Although, to be fair, it’s not always clear what measures to use when assessing a design’s goodness of fit to a design problem—or even what the problem is, for that matter.)

All of this brings me to a third guideline:

Guideline 3—When you’re trying to solve a design problem, usability testing serves design. It’s a tool. Use it to improve your design, not to justify your actions.
