
Asking Probing Questions During a Usability Study

Ask UXmatters

Get expert answers

A column by Janet M. Six
May 18, 2015

In this edition of Ask UXmatters, our panel of UX experts discusses how to pose probing questions to participants in a usability study and get the answers you need, without leading them to a particular answer.

You are in the midst of a usability study and, when testing one of the tasks, you observe that participants do not seem to see a design element that would help them to finish their task. What do you do now? Do you ask them whether they see the element? Do you just continue and say nothing? Do you ask them later? Is it even possible for the moderator to avoid affecting the test results in some way?


In this month’s edition of my column, Ask UXmatters, our panel of UX experts answers our readers’ questions about a broad range of user experience matters. To get answers to your own questions about UX strategy, design, user research, or any other topic of interest to UX professionals in an upcoming edition of Ask UXmatters, please send your questions to: [email protected].

The following experts have contributed answers to this edition of Ask UXmatters:

  • Carol Barnum—Director of User Research and Founding Partner at UX Firm; author of Usability Testing Essentials: Ready, Set … Test!
  • Dana Chisnell—Principal Consultant at UsabilityWorks; coauthor of Handbook of Usability Testing
  • Jordan Julien—Independent Experience Strategy Consultant
  • David Kozatch—Principal at DIG
  • Cory Lebson—Principal UX Consultant at Lebsontech; Past President, User Experience Professionals’ Association (UXPA); author of the forthcoming book UX Careers Handbook
  • Gavin Lew—Executive Vice President of User Experience at GfK
  • Ritch Macefield—CEO of Ax-Stream

Q: When participants in a usability test do not seem to have seen something on the screen that would have helped them, how can you probe to find out whether they just didn’t see it or didn’t recognize it as something relevant, without leading them?—from a UXmatters reader

“The simple answer: you can’t!” exclaims Ritch. “There are two well-established principles at play here that all good usability engineers know.

“First, the experimenter effect states that, as soon as we test something, we unavoidably change what we are testing. Of course, a usability study inevitably falls into this category. All we can do is take steps to minimize this effect. Asking participants whether they’ve noticed something has the potential to make a huge change in their thinking, so you should avoid doing so. Further, suppose someone has hypothesized that they could somehow extract this information from participants without leading them. How could we prove this to be the case? If we could not, any data that we extracted in this way would have no validity.

“Second, the notion that users even know their own mind—let alone are able to articulate it under the pressure of usability testing—is a myth that all of the usability gurus who came to human-computer interaction (HCI) from a psychology background busted decades ago. Most famously, the legendary Johnson-Laird debunked this myth when he coined the term fallacy of conscious access. When users operate user interfaces, there is simply too much going on unconsciously for verbal protocols to determine exactly why a user failed or succeeded at a task. To my knowledge, the many attempts to do this have all failed!

“So, for now, when studying task efficacy and efficiency, we should focus on what users do or don’t do—not on what they say! What users say has a role only in determining the satisfaction element of usability.

“I do, however, want to share a tactic that may be useful here,” continues Ritch. “When a participant apparently fails to notice that big, red Book Now button at the bottom of the form and tells me he can’t complete the test task, I then say: ‘Sure, that’s fine. I can fully understand that. But suppose now that you were at your desk, and your boss rings you from the airport. She tells you that she really needs that hotel booking made, or she’ll have nowhere to sleep tonight. Now, what would you do?’ It’s amazing how many people immediately seem to notice and click that button! Of course, this raises all sorts of questions about usability studies, the data they produce, and the nature of human behavior in general. I’ll leave you to draw your own conclusions!”

Living Usability Testing

“I’m a big fan of something I’ve been calling living usability testing,” replies Jordan. “It’s an iterative approach to usability testing that involves learning, then adapting a prototype based on prior test sessions. This approach often takes longer to complete the testing, but it lets you test theories along the way. I’ve always been suspicious of usability tests that extract insights from a very small sample size. So I’ve focused most of my usability testing on optimizing scenario success rates rather than identifying the reasons that pain points exist. In other words, I recommend that you don’t worry too much about why a participant didn’t interact with a page element. Instead, worry about how to ensure that the next participant will interact with it.”
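
If you take Jordan’s approach of optimizing scenario success rates across iterations, even lightweight instrumentation helps you see whether each prototype revision is actually improving matters. The following is a minimal sketch, in Python, of tallying success rates by prototype version; the session records and scenario name are invented for illustration, not data from a real study.

```python
from collections import defaultdict

# Each record is (prototype_version, scenario, succeeded). These are
# invented examples; real data would come from your session logs.
sessions = [
    (1, "book-hotel", False),
    (1, "book-hotel", False),
    (2, "book-hotel", True),
    (2, "book-hotel", False),
    (3, "book-hotel", True),
    (3, "book-hotel", True),
]

# Tally successes and attempts per prototype version.
totals = defaultdict(lambda: [0, 0])
for version, _scenario, succeeded in sessions:
    totals[version][0] += int(succeeded)
    totals[version][1] += 1

# A rising trend across versions suggests the revisions are working.
for version in sorted(totals):
    successes, attempts = totals[version]
    print(f"Prototype v{version}: {successes}/{attempts} participants "
          f"succeeded ({successes / attempts:.0%})")
```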

Move On, Then Return to the Issue During the Post-Test Interview

“Leave the page,” recommends David. “Then, once you’ve completed the non-assisted portion of the session, go back to the page during the post-test interview and ask the participant, ‘Do you recognize any other functions on this page that might have helped you to do xyz?’

“If the participant still doesn’t see it, point it out and ask why he thinks he might have missed or purposely chosen not to use that function. Was it because of its size, color, placement, label, or other factors? If you’re using eyetracking, note whether the participant’s gaze fell upon that function, then mention this as part of the context for your question—for example, ‘I noticed that your gaze fell upon this part of the page. Were you looking, but not seeing? Why?’”
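
If you do have eyetracking data, determining whether a participant’s gaze ever fell upon the element is essentially an area-of-interest (AOI) hit test against the element’s on-screen bounding box. Below is a minimal sketch of that check; the gaze samples, button coordinates, and the gaze_hits helper are all hypothetical, and in practice you would export gaze data from your eyetracker’s own analysis software.

```python
from typing import NamedTuple

class GazeSample(NamedTuple):
    x: float           # screen x coordinate, in pixels
    y: float           # screen y coordinate, in pixels
    timestamp_ms: int  # time since the start of the recording

# Hypothetical bounding box for the missed element, as
# (left, top, right, bottom) in screen pixels.
BOOK_NOW_AOI = (860.0, 640.0, 1020.0, 690.0)

def gaze_hits(samples, aoi):
    """Return the gaze samples that fall inside the AOI rectangle."""
    left, top, right, bottom = aoi
    return [s for s in samples
            if left <= s.x <= right and top <= s.y <= bottom]

# Three invented samples: the first misses the element, the other two hit it.
samples = [
    GazeSample(400, 300, 0),
    GazeSample(900, 655, 120),
    GazeSample(910, 660, 180),
]

hits = gaze_hits(samples, BOOK_NOW_AOI)
print(f"{len(hits)} of {len(samples)} gaze samples fell on the element")
```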

Ask Questions

“Stephanie Rosenbaum of TecEd taught me a technique called graduated prompting,” answers Dana. “First, tell the person just to try again. If the participant still says he can’t find it, you can point out that he’s been focusing on a specific area and suggest that he look around. Finally, you might point out the element specifically and follow up with a question about it—for example, ‘What might have helped you to find or notice that?’ or ‘What were you looking for that we didn’t show you?’”
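
Because graduated prompting escalates through fixed levels, some teams find it useful to script the levels in advance so that moderators apply them consistently and can record how far they had to escalate for each participant. Here is a minimal sketch of that idea; the prompt wording and the next_prompt helper are illustrative assumptions, not part of Dana’s description.

```python
from typing import Optional

# Invented prompt levels for a graduated-prompting protocol. Recording the
# highest level a participant needed gives a rough measure of how much
# assistance each task required.
PROMPT_LEVELS = [
    "Please try again.",
    "You seem to be focusing on one area of the screen. Feel free to look around.",
    "Here is the element. What might have helped you to find or notice it?",
]

def next_prompt(level: int) -> Optional[str]:
    """Return the prompt for an escalation level, or None when exhausted."""
    if 0 <= level < len(PROMPT_LEVELS):
        return PROMPT_LEVELS[level]
    return None

# Example: a participant who needed two prompts before noticing the element.
for level in range(2):
    print(f"Moderator (level {level + 1}): {next_prompt(level)}")
```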

“I try not to influence the process while a participant is engaged in a task,” responds Carol. “But once the participant is finished with the task—or has given up—I’ll point to the object on the screen and suggest that clicking it would likely have gotten him closer to his goal. When I point it out to the participant, I usually get one of two responses: ‘I didn’t see that before!’ or ‘I saw that, but I didn’t think it was the correct choice.’ In either case, I ask the participant to elaborate on his response to help me to understand it and, thus, be better able to inform the design team. The typical explanation for ‘I didn’t see it’ is that it was in the ‘wrong place.’ Typical explanations for ‘I saw it, but didn’t think it was the correct choice’ usually center around the object not looking clickable or the name, label, or design of the object not providing the proper scent of information. That is, the participant didn’t think it would get him closer to his goal. In either case, asking the participant for this information after he has completed a task sheds light on his thinking, without influencing him during his problem-solving process.”

“It depends on the situation,” answers Cory. “Ideally, this can wait until the end of the session, when I’d either bring it up during the debriefing period or take the participant back to the appropriate screen and ask him to re-enact what he did. In the latter case, if the participant again misses the thing that I want to know about, I can ask about it immediately. However, if I don’t think that I can wait until the end, and I’m sure that asking will not bias the rest of the activities, I’ll ask at an earlier point. Regardless of when I ask, though, I know that participants are not always fully aware of what they did or did not notice, so once I draw their attention to a particular part of the screen, they may think they had noticed it, even though they did not.”

Gain Insight into the User’s Mental Model

“As researchers, our primary goal is to gather unbiased insights that lead us to a better understanding of the user’s experience,” replies Gavin. “One of the key challenges in good UX research is our ability to write really good questions that are not simply matching exercises—that is, questions in which the user hears a specific word and merely tries to find it in the user interface.

“But to answer the reader’s question, when participants fail to complete a task during usability test sessions, the technique we use to extract their mental model of the experience is analogous to a dentist’s pulling teeth—observers in the room feel uncomfortable just watching it. A good moderator will simply ask, ‘What would you do next?’ Even in a frustrating situation, this can provide insight into the participant’s mental model. For example, you might hear, ‘Well, I would’ve thought I should go here, but I couldn’t find anything, and now I want to go here, but that’s not it either. Sigh… So I just can’t believe it’s here.’

“Our goal is to understand the user’s mental model so we can see where a mismatch exists between the desired experience and the actual experience. Through this process of pulling teeth, the participant will tell you about everything he saw that might be relevant, so you don’t have to point anything out.

“Of course, there are cases when, after this entire process, the participant still doesn’t see anything that remotely looks like the answer,” continues Gavin. “In such a case, you know the task was a failure, and no one can argue that, if we had just given him more time, he would naturally have found it. To address this case, I often coach moderators to limit the words they use when asking probing questions. When thinking on your feet, you could accidentally bias a user by using a word that matches a target in the user interface. So I counsel moderators to instead just point to an object rather than describing it and simply say, ‘Tell me about this,’ then pause. Participants are then able to describe what that object means to them.”

Janet M. Six

Product Manager at Tom Sawyer Software

Dallas/Fort Worth, Texas, USA

Dr. Janet M. Six helps companies design easier-to-use products within their financial, time, and technical constraints. For her research in information visualization, Janet was awarded the University of Texas at Dallas Jonsson School of Engineering Computer Science Dissertation of the Year Award. She was also awarded the prestigious IEEE Dallas Section 2003 Outstanding Young Engineer Award. Her work has appeared in the Journal of Graph Algorithms and Applications and the Kluwer International Series in Engineering and Computer Science. The proceedings of conferences on Graph Drawing, Information Visualization, and Algorithm Engineering and Experiments have also included the results of her research.
