Modifying Your Usability Testing Methods to Get Early-Stage Design Feedback

Research That Works

Innovative approaches to research that informs design

A column by Michael Hawley
July 24, 2012

When you’re designing something new, it’s desirable to seek feedback on your design direction from potential users early in the design lifecycle. To elicit this feedback, you may set up sessions that look a lot like qualitative usability tests: one-on-one sessions with a moderator, in which participants work their way through a series of tasks using design artifacts. However, many traditional elements of usability testing protocols were originally developed as a means of discovering usability problems such as confusing labels and poorly placed buttons.

In the very early stages of creating a new design, our priority is not generally finding usability problems per se, but rather answering high-level questions about requirements, users’ preferences for alternative design approaches, or the overall viability of a proposed design. So, if your goal is not identifying usability problems, is usability testing a valid approach to eliciting early-stage feedback? How should you adjust your approach to get feedback that informs your overall design direction and inspires innovation?


Early-Stage Design Questions

Many of the projects I’ve worked on recently have involved new applications or Web sites, and the project teams have had lots of questions early on. These questions typically relate to the overall usefulness of a design, an audience’s preference for one approach over another, a product’s potential for success, whether certain features resonate with an audience, and how well a design meets that audience’s expectations. The specific questions depend on the context of a design, but here are a few examples of such questions.

Application Experiences

  • Where can we best position productivity tips and Help buttons within the application to ensure that people actually use them?
  • What is the optimal level of personalization and customization that people would actually use?
  • Is there a market for this application?

Content Experiences

  • What content is missing that would help overcome people’s objections or answer their critical questions?
  • How do branded labeling and content themes contribute to the overall experience or detract from it?
  • What are the levels and types of promotions and interstitials that would be acceptable to the target audience?
  • How do different audience personas prefer to consume information about our domain?

Game Experiences

  • What are the optimal rates of point accumulation and alignment with prize levels?
  • What is the best use of social media within or around the game?
  • What is the audience’s tolerance for viewing ads and interstitials during game play and when registering to begin playing?

Social Experiences

  • What are the most compelling and appealing topics or categories for conversations?
  • What level of involvement should the sponsoring company have in the social experience, if any?
  • How should we balance branded and non-branded elements of the experience to engender optimal trust through the design?
  • What elements should support commenting and reviewing and what are their attributes?

Persuasive Experiences

  • What are the most persuasive elements that would compel the target audience to register?
  • What is the design’s impact on a user’s perception of the brand?
  • How do the positioning and messaging compare against the company’s competitors?
  • What content is missing that would help the target audience make an informed decision about a product?

But wait, you say. Aren’t these market research questions—or design questions that you should answer during up-front research before design actually starts? Perhaps.

In a perfect world, early market research or a full user-centered design (UCD) process might uncover the needs of the audience, the value of satisfying them, and functional requirements for features—whether through contextual inquiry, ethnography, or other UX research methods. But, in practice, I have found several reasons why such ideal situations don’t always play out. First, it may be difficult for participants to articulate their needs without being able to react to some concept—that is, without having context. A design artifact gives participants something specific they can react to, triggering ideas. With a sketch or a low-fidelity prototype before them, participants can respond to your specific questions about your design direction. Second, being realistic, projects don’t always have the time or budget for foundational research. We sometimes need to get into sketching quickly and get immediate feedback. Finally, I find that business stakeholders and your project team often need something to react to as much as research participants do.

Reinventing the Classic Usability Test

In my work, I have found that the core elements of usability testing are appropriate for getting early-stage feedback: engaging participants in one-on-one sessions, asking them pre-task questions to gauge their background, observing participants and asking them probing questions as they attempt to use your design artifact to accomplish tasks, and summarizing the experience at the end.

The differences when your goal is to obtain early-stage feedback are

  • how you tailor the questions you ask to elicit high-level feedback
  • how you adjust your approach to devising tasks to enable you to better understand a product’s usefulness rather than identify usability problems

Pre-Task Questions

In many usability studies, before the task portion of a test session begins, moderators ask participants questions about their demographics, their experience with technology, and other products they use. The goal is to help you classify participants and identify trends in the usability issues you discover for particular user types.

However, for early-stage design feedback, I find it helpful to extend the questioning to include questions that get participants in the right frame of mind to evaluate the overall usefulness or value of a design. For example, I might ask participants about elements of their daily workflow, pain points that they must try to overcome, or recent experiences that they found frustrating. By articulating their current situation verbally, participants are not only helping me with early discovery, but also bringing to the front of their mind the issues that are most relevant to them. I remind them to think about those issues as they evaluate the design during the task portion of the session. Consequently, as they go through tasks using the design artifacts, they don’t just comment on usability issues in the proposed design, but are more likely to comment on how useful and valuable the design would be to them.

Tasks

Defining good tasks for usability tests is essential to discovering usability problems. Predefined tasks help a researcher to focus on specific aspects of a design that are in question. As participants proceed through tasks during test sessions, a moderator observes them, noting areas of confusion or hesitation, measuring time on task and completion percentage, and so on.

During the early stage of creating a new design, a project team may think they know the key tasks a design should enable users to accomplish. But there may also be unconfirmed assumptions about the priority of tasks or missing requirements. For early-stage assessment, I build on the requirements I’ve gathered when asking pre-task questions by placing more emphasis on participant-directed tasks than predefined tasks. I start the task portion of the study by asking participants what they might want or need to do with the proposed design. The goal is to understand whether the proposed design aligns with user needs and expectations. Even if the design artifact or prototype does not support exactly what they want to do, I can learn from their intentions and explorations.

Post-Task Assessments

Once participants have interacted with a design through a series of tasks during a usability test, it is common to ask them: “On a scale of 1 to 5, where 1 is extremely easy and 5 is extremely difficult, how would you rate your experience of that task?” This line of questioning is helpful in assessing the usability of the proposed design and identifying points of confusion or ambiguity that need iterative refinement and improvement. Even if you don’t report the scores for such questions, just asking them can be helpful in eliciting further description of why participants scored an experience the way they did. But again, these types of questions focus on the usability of the proposed design.

For early-stage feedback, I instead ask participants to think about their original expectations and actual needs and how their experience of the design artifact compares to them. By tailoring my questions in this manner, I am directing participants to think about the overall efficacy of a design and its opportunity to deliver value. Of course, some participants may not be able to articulate this comparison very well. But through this line of questioning, I am more likely to get participants to comment on high-level, strategic design considerations and to get the answers I need.

Conclusion

For UX professionals, usability testing is one of the best tools we have in our toolkit for improving user interface designs. There is no substitute for watching people try to use a design to learn what is working and what is not. However, not all usability tests are the same. If you have the opportunity to get feedback early in the design process, think about how you can modify traditional qualitative protocols to get feedback that answers your high-level, strategic questions and inspires your design team—rather than just identifying usability problems.

Chief Design Officer at Mad*Pow Media Solutions LLC

Adjunct Professor at Bentley University

Boston, Massachusetts, USA

As Chief Design Officer at Mad*Pow, Mike brings deep expertise in user experience research, usability, and design to Mad*Pow clients, providing tremendous customer value. Prior to joining Mad*Pow, Mike served as Usability Project Manager for Staples, Inc., in Framingham, Massachusetts, where he led design projects for customer-facing materials, including e-commerce and Web sites, marketing communications, and print materials. Previously, Mike worked at the Bentley College Design and Usability Center as a Usability Research Consultant, where he was responsible for planning, executing, and analyzing user experience research for corporate clients. At Avitage, he served as the lead designer and developer for an online Webcast application. Mike received an M.S. in Human Factors in Information Design from the Bentley College McCallum Graduate School of Business in Waltham, Massachusetts, and has more than 13 years of usability experience.
