In this month’s edition of my column, Ask UXmatters, our panel of UX experts answers our readers’ questions about a broad range of user experience matters. To get answers to your own questions about UX strategy, design, user research, or any other topic of interest to UX professionals in an upcoming edition of Ask UXmatters, please send your questions to: [email protected].
The following experts have contributed answers to this edition of Ask UXmatters:
- Carol Barnum—Director of User Research and Founding Partner at UX Firm; Author of Usability Testing Essentials: Ready, Set … Test!
- Dana Chisnell—Principal Consultant at UsabilityWorks; Co-author of Handbook of Usability Testing
- Jordan Julien—Independent Experience Strategy Consultant
- David Kozatch—Principal at DIG
- Cory Lebson—Principal UX Consultant at Lebsontech; Past President, User Experience Professionals Association (UXPA); Author of the forthcoming book UX Careers Handbook
- Gavin Lew—Executive Vice President of User Experience at GfK
- Ritch Macefield—CEO of Ax-Stream
Q: When participants in a usability test do not seem to have seen something on the screen that would have helped them, how can you probe to find out whether they just didn’t see it or didn’t recognize it as something relevant, without leading them?—from a UXmatters reader
“The simple answer: you can’t!” exclaims Ritch. “There are two well-established principles at play here that all good usability engineers know.
“First, the experimenter effect states that, as soon as we test something, we unavoidably change what we are testing. A usability study inevitably falls into this category, and all we can do is take steps to minimize the effect. Asking participants whether they’ve noticed something can dramatically change their thinking, so you should avoid doing so. Further, even if someone hypothesized that they could somehow extract this information from participants without leading them, how could we prove that to be the case? If we could not, any data that we extracted in this way would have no validity.
“Second, the notion that users even know their own minds—let alone are able to articulate them under the pressure of usability testing—is a myth that the usability gurus who came to human-computer interaction (HCI) from a psychology background busted decades ago. Most famously, the legendary Johnson-Laird debunked this myth when he coined the term fallacy of conscious access. When users operate user interfaces, there is simply too much going on unconsciously for verbal protocols to determine exactly why a user failed or succeeded with a task. To my knowledge, the many attempts to do this have all failed!
“So, for now, when studying task efficacy and efficiency, we should focus on what users do or don’t do—not on what they say! What users say has a role only in assessing the satisfaction element of usability.
“I do, however, want to share a tactic that may be useful here,” continues Ritch. “When a participant apparently fails to notice that big, red Book Now button at the bottom of the form and tells me he can’t complete the test task, I then say: ‘Sure, that’s fine. I can fully understand that. But suppose now that you were at your desk, and your boss rings you from the airport. She tells you that she really needs that hotel booking made, or she’ll have nowhere to sleep tonight. Now, what would you do?’ It’s amazing how many people immediately seem to notice and click that button! Of course, this raises all sorts of questions about usability studies, the data they produce, and the nature of human behavior in general. I’ll leave you to draw your own conclusions!”