I recently conducted user research on a proposed experience for a Help and value-added learning center for a Web application. The goals of the study were as follows:
Assess how well our proposed designs would align with user needs.
Understand how the new branding for the section would impact the user experience.
Understand how well a proposed conceptual approach to information categorization would support information seeking.
The setup for this study was similar to that for any typical usability study. We invited people to participate in one-on-one sessions with a moderator and asked participants to complete a series of tasks while using the think-aloud protocol. Project team members, including designers and business sponsors, watched from another room.
We wanted to gain the best possible understanding of the entirety of the proposed user experience, including branded words for labels, information architecture, and categorization. Therefore, during the course of the sessions, I asked participants to describe what they expected to see in a section or on a page behind a link before they clicked it. I thought this would help me to understand the users’ mindsets coming into the experience.
But at the end of one day of testing, a business sponsor who was observing the sessions asked me about my line of questioning: “What do you expect…?” She thought the question was odd, because that’s not how people think when browsing or using the Web in real life. She also suggested that asking it could introduce bias into the study, because participants would stop to really think about what they might expect. Although I was happy with the information we were getting from participants, her comments did make me wonder: should I make the testing more real?
If the goals of the study were different, I would likely agree that interrupting users during tasks might not lead to the best results. For summative testing, where the goal is to identify usability issues that would prevent users from completing tasks or to develop a benchmark for the efficiency of a site, I limit interruptions and deep, probing questions and focus mostly on watching behavior as participants complete tasks. I leverage the think-aloud protocol for simple explanations of that behavior.
However, even though I might minimize probing questions during a task, I don’t strive to make the situation more real from a context perspective. For summative testing, attempting to simulate a real scenario is not that critical. Rather, it is more important to recruit participants who match the profile of the target audience and track how those users accomplish defined tasks.
When developing the scenarios that often accompany tasks in a usability test, moderators sometimes draft elaborate scenarios to help get participants in the right frame of mind for each task. But, if we’ve recruited the right participants and our goal is to find usability problems with a given interaction, we should be able to simply ask participants to complete the task and observe their behavior.
A scenario and the background information participants need to complete a task should be the minimum necessary to get the task done. A mentor of mine once commented on the desired level of reality for this type of summative testing. He noted that we bring participants into an unrealistic environment—a usability lab—have moderators watch over their shoulders, and promise them money for their cooperation. Why would we expect that we can make the situation seem real? Better to simply ask participants to complete a task and observe where there are issues.
The usability testing I was doing for the learning center project I described earlier was formative testing. By this, I mean usability testing that occurs early in the product design process, whose goal is not primarily to find usability problems that need to be fixed, but to assess the overall user experience and understand users’ reactions to different ideas. Often, this includes comparative testing—that is, soliciting feedback on multiple solutions to a design problem. When done early in the design process, formative testing informs decisions about our design direction and general approach to interaction design and information architecture solutions.
During formative testing, while I am interested in getting participants’ genuine responses to the designs and the questions I ask them, I’m not necessarily looking to make the testing situation real—as the business sponsor on my project suggested. Rather, I am interested in using any technique or line of questioning at my disposal to elicit information about the target audience and gain insights that will inform design decisions.
By asking a question such as “What do you expect to happen when you click…?,” I obtain valuable information about the mindsets of potential users and their reactions to certain labels or design artifacts. Of course, to ensure that my perceptions don’t get skewed by one or two individuals, it’s important that I talk with multiple participants. When it’s done effectively, interrupting participants to ask probing questions or having in-depth discussions with them can lead to insights that allow us to be truly empathetic designers.
A Dose of Reality
Although I’ve stated that there is only a limited need to create realistic environments for participants during summative or formative usability testing, I do realize that there is room for a dose of reality.
First, the reaction of our project’s business sponsor was real. I was happy that she had observed the usability test sessions. I find it incredibly helpful when other team members observe test sessions, because it gives them an eye-opening experience and establishes a foundation and common ground for the whole project team to build on. If she had remained skeptical about my testing methodology, it would have been in my best interest to adjust the protocol so she would buy into the process—even if I had academic justifications for why it was okay to interrupt tasks with probing questions about participants’ expectations.
Second, there is always a danger of introducing bias during any line of questioning. Even the most experienced moderator can never run a perfectly unbiased session. With this in mind, it is always worthwhile to observe participants’ unfiltered, uninterrupted experience for at least some part of a session.
Finally, in circumstances where motive and incentive are important, providing participants with a realistic scenario can actually serve to minimize the bias of the artificial environment that exists during a usability test. For example, when studying an ecommerce environment, it’s good practice to incent participants by letting them choose their own merchandise during a shopping experience.
For the learning center project I described earlier—as well as for most applications other than ecommerce applications—the best way I’ve found to introduce realistic situations into a usability study is to allow participants to perform unguided, self-motivated tasks at the outset of a session. Before introducing the set of predefined tasks I want to study, I ask participants to describe several things they would want to do with the product. Then, even if the product’s design or prototype does not support those tasks, I ask them to use the design to achieve those goals.
By introducing reality in this way, I am able to understand participants’ mindsets before they’ve used a design, develop a set of top-priority tasks that can inform decisions about visual hierarchy and information architecture, and get an unbiased understanding of participants’ first reactions to a product’s overall design or a specific interaction. With this initial user experience as a foundation, I can then turn to the specific tasks I want to study and probe more deeply for participants’ specific expectations and reactions.
Clearly, a moderator’s interrupting participants’ tasks with probing questions such as “What do you expect…?” during usability studies can make participants’ experience seem a bit unrealistic. However, from a UX designer’s perspective, this may be acceptable if the goal is to develop insights that inform the overall design approach for a product. During formative testing, a moderator can balance the need to probe on expectations with the desire to get more natural responses by introducing participant-driven tasks at the beginning of a usability session. It’s an easy way to get the best of both worlds.
Chief Design Officer at Mad*Pow Media Solutions LLC
Adjunct Professor at Bentley University
Boston, Massachusetts, USA
As Chief Design Officer at Mad*Pow, Mike brings deep expertise in user experience research, usability, and design to Mad*Pow clients, providing tremendous customer value. Prior to joining Mad*Pow, Mike served as Usability Project Manager for Staples, Inc., in Framingham, Massachusetts. He led their design projects for customer-facing materials, including ecommerce Web sites, marketing communications, and print materials. Previously, Mike worked at the Bentley College Design and Usability Center as a Usability Research Consultant, where he was responsible for planning, executing, and analyzing usability studies for corporate clients. At Avitage, he served as the lead designer and developer for an online Webcast application. Mike received an M.S. in Human Factors in Information Design from the Bentley College McCallum Graduate School of Business in Waltham, Massachusetts, and has more than 13 years of usability experience.